Wideband Amplifiers
by
PETER STARIČ
Jožef Stefan Institute,
Ljubljana, Slovenia
and
ERIK MARGAN
Jožef Stefan Institute,
Ljubljana, Slovenia
A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN-10: 0-387-28340-4 (HB)
ISBN-13: 978-0-387-28340-1 (HB)
ISBN-10: 0-387-28341-2 (e-book)
ISBN-13: 978-0-387-28341-8 (e-book)
Published by Springer,
P.O. Box 17, 3300 AA Dordrecht, The Netherlands.
www.springer.com
We dedicate this book to all our friends and colleagues in the art of electronics.
Table of Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
P. Starič, E. Margan
Wideband Amplifiers
Acknowledgments
The authors are grateful to John Addis, Carl Battjes, Dennis Feucht, Bruce
Hoffer and Bob Ross, all former employees of Tektronix, Inc., for allowing us to use
their class notes, ideas, and publications, and for their help when we had run into
problems concerning some specific circuits.
We are also thankful to Prof. Ivan Vidav of the Faculty of Mathematics and
Physics in Ljubljana for his help in reviewing Part 1, and to Csaba Szathmary, a
former employee of EMG, Budapest, for allowing us to use some of his measurement
results in Part 5.
However, if, in spite of meticulously reviewing the text, we have overlooked
some errors, this, of course, is our own responsibility alone; we shall be grateful to
everyone for bringing such errors to our attention, so that they can be corrected in the
next edition. To report the errors please use one of the e-mail addresses below.
Peter Starič & Erik Margan
peter.staric@guest.arnes.si
erik.margan@ijs.si
Foreword
With the exception of the tragedy on September 11, the year 2001 was
relatively normal and uneventful: remember, this should have been the year of
Clarke's and Kubrick's Space Odyssey, the mission to Jupiter; it should have been the
year of the HAL-9000 computer.
Today, the Personal Computer is as ubiquitous and omnipresent as was HAL
on the Discovery spaceship. And the rate of technology development and market
growth in the electronics industry still follows the famous Moore's Law, almost four
decades after it was first formulated: in 1965, Gordon Moore of Intel Corporation
predicted the doubling of the number of transistors on a chip every 2 years, corrected
to 18 months in 1967; at that time, the landing on the Moon was in full preparation.
Curiously enough, today no one cares to go to the Moon again, let alone
Jupiter. And, in spite of all the effort in digital engineering, we still do not have
anything close to 0.1% of the HAL capacity (fortunately?!). Whilst there are many
research labs striving to put artificial intelligence into a computer, there are also
rumors that this has already happened (with Windows-95, of course!).
In the early 1990s it was felt that digital electronics would eventually render
analog systems obsolete. This never happened. Not only is the analog sector as vital
as ever, the job market demands are expanding in all fields, from high-speed
measurement instrumentation and data acquisition, telecommunications and radio
frequency engineering, high-quality audio and video, to grounding and shielding,
electromagnetic interference suppression and low-noise printed circuit board design,
to name a few. And it looks like this demand will continue for decades to come.
But whilst the proliferation of digital systems attracted a relatively high
number of hardware and software engineers, analog engineers are still rare birds. So,
for creative young people, who want to push the envelope, there are lots of
opportunities in the analog field.
However, analog electronics did not earn its "Black Magic Art" attribute in
vain. If you have ever experienced the problems and frustrations from circuits found
in too many "cook-books" and "sure-working" schemes in electronics magazines, and
if you have become tired of performing exorcism on every circuit you build, then it is
probably time to try a different way: in our own experience, the hard way of
doing the correct math first often turns out to be the easy way!
Here is the book "Wideband Amplifiers". The book is intended to serve both
as a design manual for more experienced engineers and as a learning guide for
beginners. It should help you to improve your analog designs, making better and faster
amplifier circuits, especially if time-domain performance is of major concern. We
have strived to provide the complete math for every design stage. And, to make
learning a joyful experience, we explain the derivation of important math relations
from a design engineer's point of view, in an intuitive and self-evident manner (rigorous
mathematicians might not like our approach). We have included many practical
applications, schematics, performance plots, and a number of computer routines.
However, as it is with any interesting subject, the greatest problem was never
what to include, but rather what to leave out!
In the foreword of his popular book "A Brief History of Time", Stephen
Hawking wrote that his publisher warned him not to include any math, since the
number of readers would be halved by each formula. So he included only E = mc²
and bravely cut out one half of the world population.
We went further: there are some 220 formulae in Part 1 alone. Estimating
the current world population at some 6·10⁹, of which 0.01% could be electronics
engineers, and assuming an average lifetime interest in the subject of, say, 30 years, if
the publisher's rule holds, there ought to be one reader of our book once every:
2²²⁰ / (6·10⁹ × 10⁻⁴ × 30 × 365 × 24 × 3600) ≈ 3·10⁵¹ seconds
or something like 6.6·10³³ times the total age of the Universe!
Now, whatever you might think of it, this book is not about math! It is about
getting your design to run right the first time! Be warned, though, that it will not be
enough to just read the book. To have any value, a theory must be put into practice.
Although there is no theoretical substitute for hands-on experience, this book should
help you to significantly shorten the trial-and-error phase.
We hope that by studying this book thoroughly you will find yourself at the
beginning of a wonderful journey!
Peter Starič and Erik Margan,
Ljubljana, June 2003
Important Note:
We would like to reassure the Concerned Environmentalists
that during the writing of this book, no animal or plant suffered
any harm whatsoever, either in direct or indirect form (excluding the
authors, one computer mouse and countless computation bugs!).
Release Notes
The manuscript of this book first appeared in the spring of 1988.
Since then, the text has been revised several times, with some minor errors
corrected and figures redrawn, in particular in Part 2, where inductive peaking
networks are analyzed. Several topics have been updated to reflect the latest
developments in the field, mainly in Part 5, dealing with modern high-speed circuits.
Part 6, where a number of computer algorithms are developed, and Part 7,
containing several algorithm application examples, were also brought up to date.
This is release version 3 of the book.
The book also comes in the Adobe Portable Document Format (PDF),
readable by the Adobe Acrobat Reader program (the latest version can be
downloaded free of charge from http://www.adobe.com/products/Acrobat/ ).
One of the advantages of the book, offered by the PDF format and the Reader
program, is the numerous links (blue underlined text), which enable easy access to
related topics by pointing the mouse cursor at the link and clicking the left mouse
button. Returning to the original reading position is possible by clicking the right
mouse button and selecting "Go Back" from the pop-up menu (see the Acrobat
Reader Help menu). There are also numerous highlights (green underlined text)
relating to the content within the same page.
The cross-file links (red underlined text) relate to the contents in different PDF
files, which open by clicking the link in the same way.
The Internet and World-Wide-Web links are in violet (dark magenta) and are
accessed by opening the default browser installed on your computer.
The book was written and edited using EXP, the Scientific Word Processor,
version 5.0 (made by Simon L. Smith, see http://www.expswp.com/ ).
The computer algorithms developed and described in Parts 6 and 7 are intended
as tools for the amplifier design process. Written for Matlab, "the Language of
Technical Computing" (The MathWorks, Inc., http://www.mathworks.com/), they have
all been revised to conform with the newer versions of Matlab (version 5.3 for
Students), while still retaining backward compatibility (down to version 1) as much as
possible. The files can be found on the CD in the Matlab folder as *.M files, along
with the information on how to install and use them within the Matlab program. We
have used Matlab to check all the calculations and draw most of the figures. Before
importing them into EXP, the figures were finalized using Adobe Illustrator,
version 8 (see http://www.adobe.com/products/Illustrator/ ).
All circuit designs were checked using Micro-CAP, the Microcomputer
Circuit Analysis Program, v. 5 (Spectrum Software, http://www.spectrum-soft.com/ ).
Some of the circuits described in the book can be found on the CD in the MicroCAP
folder as *.CIR files, which readers with access to the MicroCAP program can
import and run the simulations themselves.
Part 1
About Transforms
The Laplace transform can be used as a powerful method of solving
linear differential equations. By using a time domain integration to obtain
the frequency domain transfer function and a frequency domain integration
to obtain the time domain response, we are relieved of a few nuisances of
differential equations, such as defining boundary conditions, not to speak of
the difficulties of solving high order systems of equations.
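The forward step described above can be made concrete with a short numerical check. The sketch below (in Python, used here only as an illustration tool; the function name `laplace_numeric` is our own, not from the book) evaluates the defining integral F(s) = ∫₀^∞ f(t) e^(−st) dt for f(t) = e^(−at) and compares it with the known transform 1/(s + a):

```python
import numpy as np
from scipy.integrate import quad

def laplace_numeric(f, s, upper=50.0):
    """Numerically evaluate the one-sided Laplace integral F(s) = int_0^inf f(t) e^(-s t) dt."""
    return quad(lambda t: f(t) * np.exp(-s * t), 0.0, upper)[0]

a = 3.0
F_num = laplace_numeric(lambda t: np.exp(-a * t), s=2.0)
F_known = 1.0 / (2.0 + a)   # table entry: L{e^(-a t)} = 1/(s + a), for Re(s) > -a
print(abs(F_num - F_known) < 1e-8)
```

The same check works for any transform pair, as long as s lies to the right of the abscissa of absolute convergence so that the integral converges.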
Although Laplace had already used integrals of exponential functions
for this purpose at the beginning of the 19th century, the method we now
attribute to him was effectively developed some 100 years later in
Heaviside's operational calculus.
The method is applicable to a variety of physical systems (and even
some non-physical ones, too!) involving the transport, storage, and
transformation of energy, but we are going to use it in a relatively narrow
field: calculating the time domain response of amplifier and filter systems,
starting from a known frequency domain transfer function.
As for any tool, the transform tools, be they Fourier, Laplace,
Hilbert, etc., have their limitations. Since the parameters of electronic
systems can vary over the widest of ranges, it is important to be aware of
these limitations in order to use the transform tool correctly.
List of Tables:
Table 1.2.1: Square Wave Fourier Components ....................................................................................... 1.15
Table 1.5.1: Ten Laplace Transform Examples ........................................................................................ 1.30
Table 1.6.1: Laplace Transform Properties .............................................................................................. 1.39
Table 1.8.1: Differences Between Real and Complex Line Integrals ....................................................... 1.48
List of Figures:
Fig. 1.1.1: Sine wave in three ways ................................................................................................................. 1.7
Fig. 1.1.2: Amplifier overdrive harmonics ...................................................................................................... 1.9
Fig. 1.1.3: Complex phasors ........................................................................................................................... 1.9
Fig. 1.2.1: Square wave and its phasors ........................................................................................................ 1.11
Fig. 1.2.2: Square wave phasors rotating ...................................................................................................... 1.12
Fig. 1.2.3: Waveform with and without DC component ............................................................................... 1.13
Fig. 1.2.4: Integration of rotating and stationary phasors ............................................................................. 1.14
Fig. 1.2.5: Square wave signal definition ...................................................................................................... 1.14
Fig. 1.2.6: Square wave frequency spectrum ................................................................................................ 1.14
Fig. 1.2.7: Gibbs phenomenon .................................................................................................................... 1.16
Fig. 1.2.8: Periodic waveform example ........................................................................................................ 1.16
Fig. 1.3.1: Square wave with extended period .............................................................................................. 1.17
Fig. 1.3.2: Complex spectrum of the timely spaced square wave ................................................... 1.17
Fig. 1.3.3: Complex spectrum of the square pulse with infinite period ......................................................... 1.20
Fig. 1.3.4: Periodic and aperiodic functions ................................................................................................. 1.21
Fig. 1.4.1: The abscissa of absolute convergence ......................................................................................... 1.24
Fig. 1.5.1: Unit step function ........................................................................................................................ 1.25
Fig. 1.5.2: Unit step delayed ......................................................................................................................... 1.25
Fig. 1.5.3: Exponential function ................................................................................................................... 1.26
Fig. 1.5.4: Sine function ............................................................................................................................... 1.26
Fig. 1.5.5: Cosine function ............................................................................................................................ 1.27
Fig. 1.5.6: Damped oscillations .................................................................................................................... 1.27
Fig. 1.5.7: Linear ramp function ................................................................................................................... 1.28
Fig. 1.5.8: Power function ............................................................................................................................ 1.28
Fig. 1.5.9: Composite linear and exponential function ................................................................................. 1.30
Fig. 1.5.10: Composite power and exponential function .............................................................................. 1.30
Fig. 1.6.1: The Dirac impulse function ......................................................................................................... 1.35
Fig. 1.7.1: Instantaneous voltage on L, C and R .......................................................................... 1.41
Fig. 1.7.2: Step response of an RC network ................................................................................ 1.43
Fig. 1.8.1: Integral of a real inverting function ............................................................................................. 1.45
Fig. 1.8.2: Integral of a complex inverting function ..................................................................................... 1.47
Fig. 1.8.3: Different integration paths of equal result ................................................................................... 1.49
Fig. 1.8.4: Similar integration paths of different result ................................................................................. 1.49
Fig. 1.8.5: Integration paths about a pole ..................................................................................................... 1.51
Fig. 1.8.6: Integration paths near a pole ....................................................................................................... 1.51
Fig. 1.8.7: Arbitrary integration paths .......................................................................................................... 1.51
Fig. 1.8.8: Integration path encircling a pole ................................................................................................ 1.51
Fig. 1.9.1: Contour integration path around a pole ....................................................................................... 1.53
Fig. 1.9.2: Contour integration not including a pole ..................................................................................... 1.53
Fig. 1.10.1: Cauchy's method of expressing analytical functions ................................................................. 1.55
Fig. 1.12.1: Emmentaler cheese .................................................................................................................... 1.65
Fig. 1.12.2: Integration path encircling many poles ...................................................................................... 1.65
Fig. 1.13.1: Complex line integration of a complex function ....................................................................... 1.67
Fig. 1.13.2: Integration path of Fig.1.13.1 .................................................................................................... 1.67
Fig. 1.13.3: Integral area is smaller than M L ............................................................................... 1.67
Fig. 1.13.4: Cartesian and polar representation of complex numbers ........................................................... 1.68
Fig. 1.13.5: Integration path for proving the Laplace transform ............................................................... 1.69
Fig. 1.13.6: Integration path for proving the input functions .................................................................... 1.71
Fig. 1.14.1: RLC circuit driven by a current step ....................................................................... 1.73
Fig. 1.14.2: RLC circuit transfer function magnitude ................................................................ 1.75
Fig. 1.14.3: RLC circuit in time domain .................................................................................... 1.79
Fig. 1.15.1: Convolution of two functions .................................................................................................... 1.82
Fig. 1.15.2: System response calculus in time and frequency domain .......................................................... 1.83
1.0 Introduction
With the advent of television and radar during the Second World War, the behavior
of wideband amplifiers in the time domain became very important [Ref. 1.1]. In today's
digital world this is even more the case. It is a paradox that designers and troubleshooters of
digital equipment still depend on oscilloscopes, which, at least in their fast and low-level
input stages, consist of analog wideband amplifiers. So the calculation of the time domain
response of wideband amplifiers has become even more important than the frequency,
phase, and time delay response.
The emphasis of this book is on the amplifier's time domain response. Therefore a
thorough knowledge of time related calculus, explained in Part 1, is a necessary
prerequisite for understanding all other parts of this book where wideband amplifier
networks are discussed.
The time domain response of an amplifier can be calculated by two main methods:
the first is based on differential equations and the second uses the inverse Laplace
transform (ℒ⁻¹ transform). The differential equation method requires the calculation of
boundary conditions, which in the case of high-order equations is an unpleasant and
time consuming job. Another method, which also uses differential equations, is the so
called state variable calculation, in which a differential equation of order n is split into n
differential equations of the first order, in order to simplify the calculations. The state
variable method also allows the calculation of nonlinear differential equations. We will use
neither of these, for the simple reason that the Laplace transform and its inverse are based
on the system poles and zeros, which prove so useful for network calculations in the
frequency domain in the later parts of the book. So most of the data calculated
there are used further in the time domain analysis, thus saving a great deal of work. Also the
use of the ℒ⁻¹ transform does not require the calculation of boundary conditions, giving the
result directly in the time domain.
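Since the whole method rests on the system poles and zeros, it is worth noting that for a rational transfer function these are simply the roots of the denominator and numerator polynomials. A minimal sketch (the book's own routines are written for Matlab; this Python/NumPy equivalent is our own illustration, and `np.roots` plays the role of Matlab's `roots`):

```python
import numpy as np

# An example two-pole transfer function F(s) = 1 / (s^2 + 2 s + 2)
num = [1.0]                # numerator coefficients (no finite zeros)
den = [1.0, 2.0, 2.0]      # denominator coefficients, highest power of s first

poles = np.roots(den)      # the poles are the roots of the denominator polynomial
print(np.sort_complex(poles))   # a complex conjugate pair at -1 +/- 1j
```

Everything the later chapters compute in the frequency domain (pole positions, peaking, bandwidth) can be reused directly for the time domain response, which is the main reason for choosing the ℒ⁻¹ approach.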
In using the ℒ⁻¹ transform most engineers depend on tables. Their method consists
firstly of splitting the amplifier transfer function into partial fractions and then looking for
the corresponding time domain functions in the ℒ transform tables. The sum of all these
functions (as derived from partial fractions) is then the result. The difficulty arises when no
corresponding function can be found in the tables, or even at an earlier stage, if the
mathematical knowledge available is insufficient to transform the partial fractions into such
a form as to correspond to the formulae in the tables.
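The partial fraction step itself is easy to automate. As a hedged illustration (the book's computer routines are written for Matlab; here we use Python's `scipy.signal.residue`, which plays the same role as Matlab's `residue`), a two-pole transfer function is split into simple fractions whose inverse transforms are plain exponentials:

```python
import numpy as np
from scipy.signal import residue

# H(s) = 1 / ((s + 1)(s + 2)) = 1 / (s^2 + 3 s + 2)
r, p, k = residue([1.0], [1.0, 3.0, 2.0])   # residues r_i, poles p_i, direct term k

def h(t):
    # each partial fraction r_i / (s - p_i) inverse-transforms to r_i * exp(p_i * t)
    return sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)).real

# compare with the table result h(t) = exp(-t) - exp(-2t)
t = 1.0
print(abs(h(t) - (np.exp(-t) - np.exp(-2 * t))) < 1e-12)
```

The same three arrays (residues, poles, direct term) contain everything a table lookup would provide for a rational function with simple poles.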
In our opinion an amplifier designer should be self-sufficient in calculating the time
domain response of a wideband amplifier. Fortunately, this can almost always be derived
from simple rational functions and it is relatively easy to learn the ℒ⁻¹ transforms for such
cases. In Part 1 we show how this is done generally, as well as for a few simple examples.
A great deal of effort has been spent on illustrating the less clear relationships with relevant
figures. Since engineers seek to obtain a first-glance insight into their subject of study, we
believe this approach will be helpful.
This part consists of four main sections. In the first, the concept of harmonic (i.e.,
sinusoidal) functions, expressed by pairs of counter-rotating complex conjugate phasors, is
explained. Then the Fourier series of periodic waveforms is discussed to obtain the
discrete spectra of periodic waveforms. This is followed by the Fourier integral to obtain
continuous spectra of non-repetitive waveforms. The convergence problem of the Fourier
integral is solved by introducing the complex frequency variable s = σ + jω, thus coming
to the direct Laplace transform (ℒ transform).
The second section shows some examples of the ℒ transform. The results are
useful when we seek the inverse transforms of simple functions.
The third section deals with the theory of functions of complex variables, but only
to the extent that is needed for understanding the inverse Laplace transform. Here the line
and contour integrals (Cauchy integrals), the theory of residues, the Laurent series and the
ℒ⁻¹ transform of rational functions are discussed. The existence of the ℒ⁻¹ transform for
rational functions is proved by means of the Cauchy integral.
Finally, the concluding section deals with some aspects of the ℒ⁻¹ transform and
the convolution integral. Only two standard problems of the ℒ⁻¹ transform are shown,
because all the transient response calculations (by means of the contour integration and the
theory of residues) of amplifier networks, presented in Parts 2 to 5, give enough examples
and help to acquire the necessary know-how.
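The convolution integral mentioned above can be illustrated with a simple numerical experiment (our own example, in Python, not from the book): the step response of an RC network equals the convolution of its impulse response with the unit step input.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)      # time axis, with RC normalized to 1
h = np.exp(-t)                   # impulse response of the RC network
x = np.ones_like(t)              # unit step input
y = np.convolve(h, x)[:len(t)] * dt   # y = h * x, a discrete convolution integral
y_exact = 1.0 - np.exp(-t)       # the known step response of the RC network
print(np.max(np.abs(y - y_exact)) < 5e-3)
```

The rectangle-rule error shrinks with dt; the point is that the frequency domain product H(s)·X(s) and the time domain convolution h(t) * x(t) describe the same operation.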
It is probably impossible to discuss the Laplace transform in a manner which would
satisfy both engineers and mathematicians. Professor Ivan Vidav said: "If we
mathematicians are satisfied, you engineers would not be, and vice versa." Here we have
tried to achieve the best possible compromise: to satisfy electronics engineers and at the
same time not to offend the mathematicians. But, as our late colleague, the physicist Marko
Kogoj, used to say: "Engineers never know enough mathematics; only mathematicians
know their science to the extent which is satisfactory for an engineer, but they hardly ever
know what to do with it!" Thus successful engineers keep improving their general
knowledge of mathematics far beyond the text presented here.
After studying this part the readers will have enough knowledge to understand all
the time domain calculations in the subsequent parts of the book. In addition, the readers
will acquire the basic knowledge needed to do the time-domain calculations by themselves
and so become independent of ℒ transform tables. Of course, in order to save time, they
will undoubtedly still use the tables occasionally, or even make tables of their own. But
they will be using them with much more understanding and self-confidence, in comparison
with those who can do the ℒ⁻¹ transform only via the partial fraction expansion and the
tables of basic functions.
Those readers who have already mastered the Laplace transform and its inverse
can skip this part up to Sec. 1.14, where the ℒ⁻¹ transform of a two-pole network is dealt
with. From there on we discuss the basic examples, which we use later in many parts of the
book; the content of Sec. 1.14 should be understood thoroughly. However, if the reader
notices any substantial gaps in his/her knowledge, it is better to start at the beginning.
In the last two parts of this book, Parts 6 and 7, we derive a set of computer
algorithms which reduce the circuit's time domain analysis, performance plotting and pole
layout optimization to a pure routine. However attractive this may seem, we nevertheless
recommend the study of Part 1: a good engineer must understand the tools he/she is using in
order to use them effectively.
a = A sin ω₁t        (1.1.1)
The reason that we have appended the index 1 to ω will become apparent very
soon, when we discuss complex signals containing different frequency components.
The amplitude vs. time relation of this function is shown in Fig. 1.1.1a. This is the most
familiar display, seen by using any sine-wave oscillator and an oscilloscope.
Fig. 1.1.1: Three different presentations of a sine wave: a) amplitude in the time domain; b) a phasor
of length A, rotating with angular frequency ω₁; c) two complex conjugate phasors of length A/2,
rotating in opposite directions with angular frequency ±ω₁, at ω₁t = 0; d) the same as c), except at
ω₁t = π/4.
(1.1.2)
displayed in a three-dimensional presentation in Fig. 1.1.1c. Here both phasors are shown at
ω₁t = 0 (or ω₁t = 2π, 4π, ...). The sum of both phasors has the instantaneous value a,
which is always real. This is ensured because both phasors rotate with the same angular
frequency, +ω₁ and −ω₁, starting as shown in Fig. 1.1.1c, and therefore they are always
complex conjugate at any instant. We express a by the well-known Euler formula:
a = A (e^(jω₁t) − e^(−jω₁t)) / 2j        (1.1.3)
The j in the denominator means that both phasors are imaginary at t = 0. The sum of both
rotating phasors is then zero, because:

f(0) = A (e^(jω₁·0) − e^(−jω₁·0)) / 2j = 0        (1.1.4)
Both phasors in Fig. 1.1.1c and 1.1.1d are placed on the frequency axis at such a
distance from the origin as to correspond to the frequency ω₁. Since the phasors rotate
with time, Fig. 1.1.1d, which shows them at φ = ω₁t = π/4, helps us to acquire the idea
of a three-dimensional presentation. The understanding of these simple time-frequency
relations, presented in Fig. 1.1.1c and 1.1.1d and expressed by Eq. 1.1.3, is essential for
understanding both the Fourier transform and the Laplace transform.
Eq. 1.1.3 can be changed to the cosine function if the phasor with +ω₁ is multiplied
by j = e^(jπ/2) and the phasor with −ω₁ by −j = e^(−jπ/2). The first multiplication means a
counter-clockwise rotation by 90° and the second a clockwise rotation by 90°. This causes
both phasors to become real at time t = 0, their sum again equaling A:

f(t) = A (e^(jω₁t) + e^(−jω₁t)) / 2 = A cos ω₁t        (1.1.5)
In general, a sinusoidal function with a non-zero phase angle φ at t = 0 is expressed as:

A sin(ωt + φ) = A (e^(j(ωt+φ)) − e^(−j(ωt+φ))) / 2j        (1.1.6)
The need to introduce the frequency axis in Fig. 1.1.1c and 1.1.1d will become
apparent in the experiment shown in Fig. 1.1.2. Here we have a unity gain amplifier with a
poor loop gain, driven by a sinewave source with frequency ω₁ and amplitude A₁, and
loaded by the resistor R_L. If the resistor's value is too low and the amplitude of the input
signal is high, the amplifier reaches its maximum output current level, and the output signal
f(t) becomes distorted (we have purposely kept the same notation A as in the previous
figure, rather than introducing the sign V for voltage). The distorted output signal contains
not just the original signal with the same fundamental frequency ω₁, but also a third
harmonic component with the amplitude A₃ < A₁ and frequency ω₃ = 3ω₁:
f(t) = A₁ sin ω₁t + A₃ sin 3ω₁t = A₁ sin ω₁t + A₃ sin ω₃t        (1.1.7)
Vi = Ai sin ω₁t,  V₁ = A₁ sin ω₁t,  V₃ = A₃ sin 3ω₁t,  Vo = V₁ + V₃

Fig. 1.1.2: The amplifier is slightly overdriven by a pure sinusoidal signal, Vi, with a frequency ω₁
and amplitude Ai. The output signal Vo is distorted, and it can be represented as a sum of two
signals, V₁ + V₃. The fundamental frequency of V₁ is ω₁ and its amplitude A₁ is somewhat lower.
The frequency of V₃ (the third harmonic component) is ω₃ = 3ω₁ and its amplitude is A₃.
Now let us draw the output signal in the same way as we did in Fig. 1.1.1c,d. Here
we have two pairs of harmonic components: the first pair of phasors, A₁/2, rotating with the
fundamental frequency ω₁, and the second pair, A₃/2, rotating with the third harmonic
frequency ω₃, three times more distant from the origin than ω₁. This is shown
in Fig. 1.1.3a, where all four phasors are drawn at time t = 0. Fig. 1.1.3b shows the phasors
at time t = π/(4ω₁). Because the third harmonic phasor pair rotates with an angular
frequency three times higher, they rotate up to an angle 3π/4 in the same time.
Fig. 1.1.3: The output signal of the amplifier in Fig. 1.1.2, expressed by two pairs of complex
conjugate phasors: a) at ω₁t = 0; b) at ω₁t = π/4.
Mathematically Eq. 1.1.7, according to Fig. 1.1.2 and 1.1.3, can be expressed as:
f(t) = A₁ sin ω₁t + A₃ sin ω₃t        (1.1.8)
The amplifier output obviously cannot exceed either its supply voltage or its maximum output current. So if we keep increasing the input amplitude, the amplifier will clip the upper and lower peaks of the output waveform (some input protection, as well as some internal signal source resistance, must be assumed if we want the amplifier to survive these conditions), thus generating more harmonics. If the input amplitude is very high and if the amplifier loop gain is high as well, the output voltage f(t) will eventually approach a square wave, such as in Fig. 1.2.1b in the following section. A true mathematical square wave has an infinite number of harmonics; since no amplifier has an infinite bandwidth, the number of harmonics in the output voltage of any practical amplifier will always be finite.
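The growth of harmonic content with clipping can be illustrated numerically. The following is a sketch added for illustration, not part of the original text; the sample count and clipping levels are arbitrary choices:

```python
import math, cmath

def harmonic(signal, n):
    # n-th complex Fourier coefficient of one sampled period
    N = len(signal)
    return sum(signal[k] * cmath.exp(-2j * math.pi * n * k / N)
               for k in range(N)) / N

N = 4096
def clipped_sine(limit):
    # one period of a sine wave, clipped symmetrically at +/- limit
    return [max(-limit, min(limit, math.sin(2 * math.pi * k / N)))
            for k in range(N)]

soft = clipped_sine(0.9)   # mild clipping
hard = clipped_sine(0.3)   # hard clipping, approaching a square wave

ratio_soft = abs(harmonic(soft, 3)) / abs(harmonic(soft, 1))
ratio_hard = abs(harmonic(hard, 3)) / abs(harmonic(hard, 1))

# harder clipping -> relatively stronger third harmonic
assert ratio_hard > ratio_soft > 0.0
# symmetrical clipping generates no even harmonics
assert abs(harmonic(hard, 2)) < 1e-9
```

The even harmonics stay at zero because symmetrical clipping preserves the half-wave symmetry of the sine wave, exactly as for the square wave analyzed in the next section.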
In the next section we are going to examine a generalized harmonic analysis.
    f(t) = f(t + T₁)                    (1.2.1)
Consequently the same is true for f(t) = f(t + nT₁), where n is an integer. According to Fourier¹ this square wave can be expressed as a sum of harmonic components with frequencies f_n = n/T₁. If n = 1 we have the fundamental frequency f₁ with a phasor A₁/2, rotating counter-clockwise. The phasor of the same length, A₋₁/2 = A₁/2, rotates clockwise and forms a complex conjugate pair with the first one. A true square wave would have an infinite number of odd-order harmonics (all even-order harmonics are zero).
[Figure: a) the first four complex conjugate phasor pairs A₁/2, A₃/2, A₅/2, A₇/2 at ±ω₁, ±3ω₁, ±5ω₁, ±7ω₁, drawn at t = 0, with ω₁ = 2π/T₁; b) the square wave f(t) = 1 for 0 < t < T₁/2, f(t) = −1 for T₁/2 < t < T₁]
Fig. 1.2.1: A square wave, as shown in b), has an infinite number of odd-order frequency components, of which the first 4 complex-conjugate phasor pairs are drawn in a) at time t = 0 + 2nπ/ω₁, where n is an integer representing the number of the period.
¹ It is interesting that Fourier developed this method in connection with thermal engineering. As a general in Napoleon's army he was concerned with gun deformation by heat. He supposed that one side of a straight metal bar is heated and the bar is then bent, joining the ends, to form a torus. He then calculated the temperature distribution along the circle so formed, expressing it as a sum of sinusoidal functions, each having a different amplitude and a different angular frequency.
In Fig. 1.2.1 we have drawn the complex-conjugate phasor pairs of the first 4 harmonics. Because all the phasor pairs are always complex-conjugate, the sum of any pair, as well as their total sum, is always real. The phasors rotate with different speeds and in opposite directions. Fig. 1.2.2a shows them at time T₁/8 to help the reader's imagination. Although this figure looks confusing, the phasors shown have an exact inter-relationship. Looking at the positive ω axis, the phasor with the amplitude A₁/2 has rotated in the counter-clockwise direction by an angle of π/4. During the same interval of T₁/8 the remaining phasors have rotated: A₃/2 by 3π/4; A₅/2 by 5π/4; A₇/2 by 7π/4; etc. The corresponding complex conjugate phasors on the negative ω axis rotate likewise, but in the opposite (clockwise) direction. The sum of all phasors at any instant t is the instantaneous amplitude of the time domain function. In general, the time function with the fundamental frequency ω₁ is expressed as:
    f(t) = Σ_{n=−∞}^{∞} (A_n/2) e^(jnω₁t)
         = A₀ + (A₁/2) e^(jω₁t) + (A₋₁/2) e^(−jω₁t) + (A₂/2) e^(j2ω₁t) + (A₋₂/2) e^(−j2ω₁t) + ⋯                    (1.2.2)

[Figure: the same phasor pairs as in Fig. 1.2.1a, rotated to the instant ω₁t = π/4, next to the square wave f(t) = 1 for 0 < t < T₁/2, f(t) = −1 for T₁/2 < t < T₁]
Fig. 1.2.2: As in Fig. 1.2.1, but at an instant t = π/4ω₁ + 2nπ/ω₁; a) the spectrum, expressed by complex conjugate phasor pairs, corresponds to the instant t = π/4ω₁ marked in b).
Note that for the square wave all the even frequency components are missing. For other types of waveform the even coefficients can be non-zero. In general A_n may also be complex, thus containing some non-zero initial phase angle φ_n. In Eq. 1.2.2 we have also introduced A₀, the DC component, which did not exist in our special case. The meaning of A₀ can be understood by examining Fig. 1.2.3a, where the so-called sawtooth waveform is shown, with no DC component. In Fig. 1.2.3b, the waveform has a DC component of magnitude A₀.
Eq. 1.2.2 represents the complex spectrum of the function f(t), while Fig. 1.2.1 represents the most significant part of the complex spectrum of a square wave. The next step is the calculation of the magnitudes of the rotating phasors.
[Figure: a) a sawtooth waveform f(t) with A₀ = 0; b) the same waveform shifted up by a DC component A₀ (Fig. 1.2.3)]
A single spectral component of Eq. 1.2.2 is the rotating phasor:

    (A_k/2) e^(jφ_k) e^(jω_k t)                    (1.2.3)

Now we multiply this expression by a unit amplitude, clockwise rotating phasor e^(−jω_k t) (having the same angular frequency ω_k) to cancel the e^(jω_k t) term [Ref. 1.2]:

    (A_k/2) e^(jφ_k) e^(jω_k t) e^(−jω_k t) = (A_k/2) e^(jφ_k)                    (1.2.4)
and obtain a non-rotating component which has the magnitude A_k/2 and phase angle φ_k at any time. With this in mind let us attack the whole time function f(t). The multiplication must last exactly one whole period, and the corresponding expression is:

    A_k/2 = (1/T) ∫_{−T/2}^{T/2} f(t) e^(−jω_k t) dt                    (1.2.5)
Since we have integrated over the whole period T in order to get the average value of that harmonic component, the result of the integration must be divided by T, as in Eq. 1.2.5. If there is a DC component (with ω = 0) in the spectrum, its calculation is simply:

    A₀ = (1/T) ∫_{−T/2}^{T/2} f(t) dt                    (1.2.6)
To return to Eq. 1.2.5, let us explain the meaning of its integration by means of Fig. 1.2.4.
By multiplying the function f(t) by e^(−jω_k t) we have stopped the rotating phasor A_k/2, while during the time interval of integration all the other phasors have rotated through an angle of 2nπ (where n is an integer), including the DC phasor A₀, because it is now multiplied by e^(−jω_k t). The result of the integration for all these rotating phasors is zero, as indicated in Fig. 1.2.4a, while the phasor A_k/2 has stopped, eventually integrating to its full amplitude; the integration for this phasor only is shown in Fig. 1.2.4b.
Understanding the described effect of the multiplication f(t) e^(−jω_k t) is essential to understanding the basic principles of the Fourier series, the Fourier integral and the Laplace transform.
[Figure: a) a rotating phasor A_k/2 traces a full circle over one period; b) the stopped phasor adds its elements d(A_k/2) along a straight line]
Fig. 1.2.4: a) The integral over the full period T of a rotating phasor is zero; b) the integral over a full period T of a non-rotating phasor element d(A_k/2) gives its amplitude, A_k/2. Note that a stationary phasor retains its initial angle φ_k.
For us the Fourier series represents only a transitional station on the journey towards the Laplace transform. So we will drive through it at a moderate speed along the Main Street, without investigating the interesting things in the side streets. Nevertheless, it is useful to work through a practical example. Since we have started with a square wave, shown in Fig. 1.2.5, let us calculate its complex spectrum components A_n/2, assuming that the square wave amplitude is A = 1.
[Figure: the square wave f(t) = 1 for 0 < t < T₁/2, f(t) = −1 for T₁/2 < t < T₁, with ω₁ = 2π/T₁, and the magnitudes of its harmonic components A₁, A₃, A₅, …, A₁₃ at ω₁, 3ω₁, 5ω₁, …, 13ω₁ (Fig. 1.2.5)]
For a single period the corresponding mathematical expression for this function is:

    f(t) = −1   for  −T/2 < t < 0
    f(t) = +1   for   0 < t < T/2
    A_n/2 = (1/T) [ ∫_{−T/2}^{0} (−1) e^(−j2πnt/T) dt + ∫_{0}^{T/2} e^(−j2πnt/T) dt ]

          = (1/(j2πn)) (1 − e^(jπn)) + (1/(j2πn)) (1 − e^(−jπn))

          = (1/(j2πn)) (2 − e^(jπn) − e^(−jπn)) = (1/(jπn)) (1 − cos πn)                    (1.2.7)

The result is zero for n = 0 (the DC component A₀) and for any even n. For any odd n the value of cos πn = −1, and for such cases the result is:

    A_n/2 = 2/(jπn) = −j 2/(πn)                    (1.2.8)
The factor −j in the numerator means that for any positive n (and for ω₁t = 0, 2π, 4π, 6π, …) the phasor is negative and imaginary, whilst for negative n it is positive and imaginary. This is evident from Fig. 1.2.1a.
Let us calculate the first few phasors by using Eq. 1.2.8. The lengths of the phasors in Fig. 1.2.1a and 1.2.2a correspond to the values reported in Table 1.2.1. All the phasors form complex conjugate pairs and their total sum always gives a real value.
Table 1.2.1: The first few harmonics of a square wave

    n       1       3       5       7       9       11       13
    A_n/2   2/jπ    2/j3π   2/j5π   2/j7π   2/j9π   2/j11π   2/j13π
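The entries of Table 1.2.1 can be checked by evaluating the defining integral Eq. 1.2.5 numerically. This is an illustrative sketch, not part of the original text; the period T = 1 and the midpoint integration rule are arbitrary choices:

```python
import math, cmath

T = 1.0
def square(t):
    # one period of the unit square wave: +1 on (0, T/2), -1 on (T/2, T)
    return 1.0 if (t % T) < T / 2 else -1.0

def coeff(n, N=20000):
    # A_n/2 = (1/T) * integral over one period of f(t) e^{-j n w1 t} dt
    w1 = 2 * math.pi / T
    dt = T / N
    return sum(square((k + 0.5) * dt) *
               cmath.exp(-1j * n * w1 * (k + 0.5) * dt)
               for k in range(N)) * dt / T

for n in (1, 3, 5, 7):
    assert abs(coeff(n) - 2 / (1j * math.pi * n)) < 1e-3   # Eq. 1.2.8
for n in (2, 4, 6):
    assert abs(coeff(n)) < 1e-6                            # even components vanish
```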
However, a spectrum can also be shown with real values only, e.g., as it appears on the cathode ray tube screen of a spectrum analyzer. To obtain this, we simply sum the corresponding complex conjugate phasor pairs (e.g., |A_n/2| + |A₋n/2| = A_n) and place them on the abscissa of a two-dimensional coordinate system, as shown in Fig. 1.2.6. Such a non-rotating spectrum has only the positive frequency axis. Although such a presentation of spectra is very useful in the analysis of signals containing several (or many) frequency components, we will continue calculating with the complex spectra, because the phase information is also important. And, of course, the Laplace transform, which is our main goal, is based on a complex variable.
Now let us recompose the waveform using only the harmonic frequency components from Table 1.2.1, as shown in Fig. 1.2.7a. The waveform resembles the square wave, but it has an exaggerated overshoot δ ≈ 18 % of the nominal amplitude.
The reason for the overshoot δ is that we have abruptly cut off the higher harmonic components from a certain frequency upwards. Would this overshoot be lower if we took more harmonics? In Fig. 1.2.7b we have increased the number of harmonic components three times, but the overshoot remained the same. For any finite number of harmonic components used to recompose the waveform, the overshoot stays the same; only its duration becomes shorter as the number of harmonic components is increased, as is evident from Fig. 1.2.7a and 1.2.7b.
This is the Gibbs phenomenon. It tells us that we should not cut off the frequency response of an amplifier abruptly if we do not wish to add an undesirably high overshoot to the amplified pulse. Fortunately, real amplifiers cannot have an infinitely steep high frequency roll-off, so a gradual decay of the high frequency response is always ensured. However, as we will explain in Parts 2 and 4, the overshoot may increase as a result of other effects.
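The Gibbs phenomenon is easy to reproduce numerically. The following sketch (added for illustration; the unit period and the sampling grid are arbitrary choices) recomposes the square wave from its odd harmonics, using the fact that each conjugate pair of Eq. 1.2.8 contributes (4/πn) sin(nω₁t), and measures the overshoot:

```python
import math

def partial_sum(t, n_harm):
    # sum of the first n_harm odd harmonics of the unit square wave
    # (from Eq. 1.2.8 each conjugate pair adds (4/(pi n)) sin(n w1 t))
    return sum(4 / (math.pi * n) * math.sin(2 * math.pi * n * t)
               for n in range(1, 2 * n_harm, 2))

overshoots = {}
for n_harm in (7, 21, 101):
    # search the first quarter period for the peak value
    peak = max(partial_sum(k / 20000.0, n_harm) for k in range(1, 5000))
    overshoots[n_harm] = peak - 1.0

# the ~18 % overshoot does not shrink as more harmonics are added
for v in overshoots.values():
    assert 0.15 < v < 0.21
```

Only the duration of the overshoot shrinks with more harmonics, not its height, in agreement with the text above.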
[Figure: partial-sum reconstructions of the square wave, plotted against t/T from −0.2 to 1.2: a) 7 harmonics; b) 21 harmonics]
Fig. 1.2.7: The Gibbs phenomenon; a) A signal composed of the first seven harmonics of the square wave spectrum from Table 1.2.1. The overshoot is δ ≈ 18 % of the nominal amplitude; b) Even if we take three times more harmonics, the overshoot δ is nearly equal in both cases.
In a similar way to that for the square wave, any periodic signal of finite amplitude and with a finite number of discontinuities within one period can be decomposed into its frequency components. As an example, the waveform in Fig. 1.2.8 could also be decomposed, but we will not do it here. Instead, in the following section we will analyze another waveform which will allow us to generalize the method of frequency analysis.
[Figure: a periodic waveform f(t), 0–20 V, plotted over t = 0–100 µs]
Fig. 1.2.8: An example of a periodic waveform (a typical flyback switching power supply), having a finite number of discontinuities within one period. Its frequency spectrum can also be calculated using the Fourier transform, if needed (e.g., to analyze the possibility of electromagnetic interference at various frequencies), in the same way as we did for the square wave.
[Figure: a single square wave cycle of length τ, repeated with period T = 5τ (Fig. 1.3.1)]
The difference between the continuous square wave spectrum and the spaced square wave in Fig. 1.3.1 is that the integral of this function can be broken into two parts, one comprising the length of the pulse, τ, and the zero-valued part between two pulses, of length T − τ. The reader can do this integration for himself, because it is fairly simple. We will only write the result:

    A_n/2 = −j (τ/T) · sin²(nω₁τ/4) / (nω₁τ/4)                    (1.3.1)

where ω₁ = 2π/T, assuming that the pulse amplitude is 1 (if the amplitude were A it would simply multiply the right hand side of the equation). For the conditions in Fig. 1.3.1, where T = 5τ and A = 1, the spectrum has the form shown in Fig. 1.3.2, with ω_τ = 2π/τ.
[Figure: the discrete spectral lines at nω₁ = n · 2π/T under the envelope of Eq. 1.3.1, with zeros at multiples of 2ω_τ = 4π/τ]
Fig. 1.3.2: Complex spectrum of the waveform in Fig. 1.3.1.
A very interesting question is what would happen to the spectrum if we let the period T → ∞. In general a function f(t) can be recomposed by adding all its harmonic components:

    f(t) = Σ_{n=−∞}^{∞} (A_n/2) e^(jnω₁t)                    (1.3.2)

where A_n may also be complex, thus containing the initial phase angle φ_n. Again, as in the previous section, each discrete harmonic component can be calculated with the integral:

    A_n/2 = (1/T) ∫_{−T/2}^{T/2} f(t) e^(−jnω₁t) dt                    (1.3.3)
For the case in Fig. 1.3.1 the integration should start at t = 0 and the integral has the form:

    A_n/2 = (1/T) ∫_0^T f(t) e^(−jnω₁t) dt                    (1.3.4)

Inserting Eq. 1.3.4 into Eq. 1.3.2 gives:

    f(t) = Σ_{n=−∞}^{∞} [ (1/T) ∫_0^T f(τ) e^(−jnω₁τ) dτ ] e^(jnω₁t)                    (1.3.5)

Here we have introduced a dummy variable τ in the integral, in order to distinguish it from the variable t outside the brackets. Now we express the integral inside the brackets as:

    ∫_0^T f(τ) e^(−jnω₁τ) dτ = ∫_0^T f(τ) e^(−j2πnτ/T) dτ = F(nω₁)                    (1.3.6)

Thus:

    f(t) = Σ_{n=−∞}^{∞} (1/T) F(nω₁) e^(jnω₁t)
         = (1/2π) Σ_{n=−∞}^{∞} (2π/T) F(nω₁) e^(jnω₁t)
         = (1/2π) Σ_{n=−∞}^{∞} ω₁ F(nω₁) e^(jnω₁t)                    (1.3.7)
where ω₁ = 2π/T. If we let T → ∞, then ω₁ becomes infinitesimal, and we call it dω. Also nω₁ becomes a continuous variable ω. So in Eq. 1.3.7 the following changes take place:

    Σ_{n=−∞}^{∞} → ∫_{−∞}^{∞}        ω₁ → dω        nω₁ → ω

With all these changes Eq. 1.3.7 is transformed into Eq. 1.3.8:

    f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^(jωt) dω                    (1.3.8)
In the same limit the spectrum function becomes:

    F(ω) = ∫_{−∞}^{∞} f(t) e^(−jωt) dt                    (1.3.9)

while the DC component, averaged over an infinitely long period, vanishes:

    A₀ = lim_{T→∞} (1/T) ∫ f(t) dt = 0                    (1.3.10)
Eq. 1.3.8 and 1.3.9 are called Fourier integrals. Under certain (usually rather limited) conditions, which we will discuss later, it is possible to use them for the calculation of transient phenomena. The second integral (Eq. 1.3.9) is called the direct Fourier transform, which we express in a shorter way:

    ℱ{f(t)} = F(ω)                    (1.3.11)

The first integral (Eq. 1.3.8) represents the inverse Fourier transform and it is usually written as:

    ℱ⁻¹{F(ω)} = f(t)                    (1.3.12)

In Eq. 1.3.8, F(ω) means the fixed (non-rotating) spectrum, and the factor e^(jωt) means the rotation of each of the infinitely many spectral components contained in F(ω) with its angular frequency ω, which is a continuous variable. In Eq. 1.3.9, f(t) means the complete time function, containing an infinite number of rotating phasors, and the factor e^(−jωt) means the rotation in the opposite direction, to stop the rotation of the corresponding rotating phasor e^(jωt) contained in f(t), at its particular frequency ω.
Let us now select a suitable time function f(t) and calculate its continuous spectrum. Since we have already calculated the spectrum of a periodic square wave, it would be interesting to display the spectrum of a single square wave cycle, as shown in Fig. 1.3.3b. We use Eq. 1.3.9:

    F(ω) = ∫_{−∞}^{∞} f(t) e^(−jωt) dt = ∫_{−τ/2}^{0} (−1) e^(−jωt) dt + ∫_{0}^{τ/2} e^(−jωt) dt                    (1.3.13)

Here we have a single square wave cycle of length τ, within a period T extending from t = −τ/2 to ∞. However, we need to integrate only from t = −τ/2 to t = τ/2, because f(t) is zero outside this interval. It is important to note that at the discontinuity, where t = 0, we have started the second integral. For a function with more discontinuities, we must write a separate integral between each pair of them. Thus it is obvious that the function f(t) must have a finite number of discontinuities for it to be possible to calculate its spectrum.
    F(ω) = (1/jω) [1 − e^(jωτ/2)] + (1/jω) [1 − e^(−jωτ/2)]

         = (2/jω) [1 − cos(ωτ/2)]

         = (4/jω) sin²(ωτ/4)

         = −jτ · sin²(ωτ/4) / (ωτ/4)                    (1.3.14)
[Figure: a) the continuous complex spectrum F(ω) = −jτ sin²(ωτ/4)/(ωτ/4); b) the single square wave cycle f(t) = −1 for −τ/2 < t < 0, f(t) = +1 for 0 < t < τ/2]
Fig. 1.3.3: a) The frequency spectrum of a single square wave cycle, expressed by complex conjugate phasors. Since the phasors are infinitely many, they merge into a continuous planar form. Also, the spectrum extends to ω = ∞. The corresponding waveform is shown in b). Note that all the even frequency components 2nω_τ are missing (ω_τ = 2π/τ, n is an integer).
By comparing Fig. 1.2.1a and 1.3.3a we may draw the following conclusions:
1. Both spectra contain no even frequency components, e.g., at 2ω_τ, 4ω_τ, etc., where ω_τ = 2π/τ;
2. In both spectra there is no DC component A₀;
3. By comparing Fig. 1.3.2 and 1.3.3 we note that the envelope of both spectra can be expressed by Eq. 1.3.14;
4. By comparing Eq. 1.3.1 and 1.3.14 we note that the discrete frequency nω₁ of the first equation is replaced by the continuous variable ω in the second. Everything else has remained the same.
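Eq. 1.3.14 can be verified against a direct numerical evaluation of Eq. 1.3.13. This is a minimal sketch, not part of the original text, assuming τ = 1 and a midpoint integration rule:

```python
import math, cmath

tau = 1.0
def f(t):
    # the single square wave cycle of Fig. 1.3.3b
    if -tau / 2 < t < 0:
        return -1.0
    if 0 < t < tau / 2:
        return 1.0
    return 0.0

def F_numeric(w, N=20000):
    # midpoint-rule approximation of Eq. 1.3.13
    dt = tau / N
    return sum(f(-tau / 2 + (k + 0.5) * dt) *
               cmath.exp(-1j * w * (-tau / 2 + (k + 0.5) * dt))
               for k in range(N)) * dt

def F_closed(w):
    # Eq. 1.3.14
    return -1j * tau * math.sin(w * tau / 4) ** 2 / (w * tau / 4)

for w in (1.0, 2.0, 5.0, 9.0):
    assert abs(F_numeric(w) - F_closed(w)) < 1e-4

# the even frequency components vanish: F = 0 at w = 2 n w_tau, w_tau = 2*pi/tau
assert abs(F_closed(2 * 2 * math.pi / tau)) < 1e-12
```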
[Figure: four waveforms f(t): a), b) periodic waveforms repeating with period kT; c), d) single transients (Fig. 1.3.4)]
The question arises of whether it is possible to calculate the spectra of the transients in Fig. 1.3.4c and 1.3.4d by means of the Fourier integral, using Eq. 1.3.8.
The answer is no, because the integral in Eq. 1.3.8 does not converge for either of these two functions. The integral is also non-convergent for the simplest step signal, which we intend to use extensively for the calculation of the step response of amplifier networks.
This inconvenience can be avoided if we multiply the function f(t) by a suitable convergence factor, e.g., e^(−ct), where c > 0 and its magnitude is selected so that the integral remains finite when t → ∞. In this way the problem is solved for t > 0. In doing so, however, the integral becomes divergent for t < 0, because for negative time the factor e^(−ct) has a positive exponent, causing a rapid increase towards infinity. But this, too, can be avoided if we assume that the function f(t) is zero for t < 0. In electrical engineering and electronics we can always assume that a circuit is dead until we switch the power on or apply a step voltage signal to its input, thus generating a transient. The transform in which f(t) must be zero for t < 0 is called a unilateral transform.
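The effect of the convergence factor e^(−ct) can be demonstrated numerically on the step function. A sketch added for illustration (the values of ω, c and the truncation limits are arbitrary choices):

```python
import cmath

w = 1.0
def truncated_integral(c, T, N=60000):
    # integral_0^T (unit step) * e^{-c t} * e^{-j w t} dt, midpoint rule
    dt = T / N
    return sum(cmath.exp(-(c + 1j * w) * (k + 0.5) * dt)
               for k in range(N)) * dt

# without a convergence factor (c = 0) the value keeps oscillating with T:
vals = [truncated_integral(0.0, T) for T in (10.0, 11.5, 13.0)]
assert max(abs(v - vals[0]) for v in vals) > 0.5

# with c > 0 the integral settles, to 1/(c + jw):
c = 0.5
assert abs(truncated_integral(c, 60.0) - 1 / (c + 1j * w)) < 1e-4
```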
For functions which are suitable for the unilateral Fourier transform the following relation must hold [Ref. 1.3]:

    lim_{T→∞} ∫_0^T |f(t)| e^(−ct) dt < ∞                    (1.3.15)

The direct transform of the product f(t) e^(−ct) is then:

    F(c, ω) = ∫_0^∞ f(t) e^(−ct) e^(−jωt) dt                    (1.3.16)
If we want this integral to converge to some finite value for t → ∞, the real constant must be c > σ_a, where σ_a is the abscissa of absolute convergence. The magnitude of σ_a depends on the nature of the function f(t): if f(t) = 1, then σ_a = 0, and if f(t) = e^(αt), then σ_a = α, where α > 0. By applying the convergence factor e^(−ct), the inverse Fourier transform obtains the form:

    f(t) e^(−ct) = (1/2π) ∫_{−∞}^{∞} F(c, ω) e^(jωt) dω        for t > 0                    (1.3.17)

Here we must add all the complex-conjugate phasors with frequencies from ω = −∞ to ∞. Although the direct Fourier transform in our case was unilateral, the inverse transform is always bilateral. Because in Eq. 1.3.16 we have deliberately introduced the convergence factor e^(−ct), we must let c → 0 after the integral is solved, in order to obtain the required F(ω).
Since our final goal is the Laplace transform we will stop the discussion of the
Fourier transform here. We will, however, return to this topic later in Part 6, where we will
discuss the solving of system transfer functions and transient responses using numerical
methods, suitable for machine computation. There we will discuss the application of the
very efficient Fast Fourier Transform (FFT) algorithm to both frequency and time domain
related problems.
The convergence factor can be merged with the transform kernel. With c > σ_a:

    F(c, ω) = ∫_0^∞ f(t) e^(−(c + jω)t) dt                    (1.4.1)

The formula for the inverse transform is derived from Eq. 1.3.17 by multiplying both sides of the equation by e^(ct). In addition, the simple variable ω is now replaced by a new one: c + jω. By doing so we obtain:

    f(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} F(c + jω) e^((c+jω)t) d(c + jω)                    (1.4.2)

If in Eq. 1.4.1 and 1.4.2 the constant c becomes a real variable σ, both equations take the form called the Laplace transform. The name is fully justified, since the French mathematician Pierre Simon de Laplace had already introduced this transform in 1779, whilst Fourier published his transform 43 years later.
It is customary to denote the complex variable σ + jω by the single symbol s, which we also call the complex frequency (in some, mostly mathematical, literature this variable is also denoted p). With this new variable Eq. 1.4.1 can be rewritten:

    F(s) = ∫_0^∞ f(t) e^(−st) dt                    (1.4.3)

and this is called the direct Laplace transform, or ℒ transform. It represents the complex spectrum F(s). The above integral is valid for functions f(t) such that the factor e^(−st) keeps the integral convergent. If we now insert the variable s in Eq. 1.4.2, we have:

    f(t) = ℒ⁻¹{F(s)} = (1/2πj) ∫_{c−j∞}^{c+j∞} F(s) e^(st) ds                    (1.4.4)
The path of integration is parallel to the imaginary axis, as shown in Fig. 1.4.1. The constant c in the integration limits must be properly chosen, in order to ensure the convergence of the integral.

[Figure: the s plane, with the integration path running from c − j∞ to c + j∞, parallel to the imaginary axis]
Fig. 1.4.1: The abscissa of absolute convergence and the integration path for Eq. 1.4.4.

The factor e^(−st) in Eq. 1.4.3 is needed to stop the rotation of the corresponding phasor e^(st); there are infinitely many such phasors in the time function f(t). As our variable is now complex, s = σ + jω, the factor e^(−st) does not mean a simple rotation, but a spiral rotation in which the radius decreases exponentially with t because of σ, the real part of s. This is necessary to cancel the corresponding rotation e^(st), contained in f(t), whose radius increases with t in an exactly equal manner [Ref. 1.23].
Since in Eq. 1.4.4 the factor e^(st) becomes divergent for t → ∞ if ℜ{st} > 0, the above conditions for the variable σ (and for the constant c) must be met to ensure the convergence of the integral. In the analysis of passive networks these conditions can always be met, as we will show in many examples in the subsequent sections.
Now, having reached our goal, the Laplace transform and its inverse, we may ask ourselves what we have accomplished by all this hard work.
For the time being we can claim that we have transformed a function of the real variable t into a function of the complex variable s. This allows us to calculate, using the ℒ transform, the spectrum function F(s) of a finite transient defined by the function f(t). Or, more importantly for us, by means of the ℒ⁻¹ transform we can calculate the time domain function if the frequency domain function F(s) is known.
Later we will show how, by means of the ℒ transform, we can transform linear differential equations in the time domain into algebraic equations in the s domain. Since algebraic equations are much easier to solve than differential ones, this is a great convenience. Once our calculations in the s domain are completed, by means of the ℒ⁻¹ transform we obtain the corresponding time domain function. In this way we avoid solving the differential equations directly, as well as the calculation of boundary conditions.
1.5.1 Example 1

The unit step function (Fig. 1.5.1) is defined as:

    h(t) = 0  for t < 0
    h(t) = 1  for t > 0

[Figure: a) the unit step starting at t = 0; b) the step delayed to t = a (Fig. 1.5.1 and 1.5.2)]

As we agreed in the previous section, f(t) = 0 for t < 0 for all the following functions, and we will not repeat this statement in further examples. At the same time let us mention that for our calculation of the ℒ transform the actual value of f(0) is not important, providing it is finite [Ref. 1.3].
The ℒ transform of the unit step function f(t) = h(t) is:

    F(s) = ∫_0^∞ e^(−st) dt = [−e^(−st)/s]  (from t = 0 to t → ∞)  = 1/s                    (1.5.1)
1.5.2 Example 2

The function is the same as in Example 1, except that the step does not start at t = 0 but at t = a > 0 (Fig. 1.5.2):

    f(t) = 0  for t < a
    f(t) = 1  for t > a

Solution:

    F(s) = ∫_a^∞ e^(−st) dt = [−e^(−st)/s]  (from t = a to t → ∞)  = e^(−as)/s                    (1.5.2)
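Both step results can be spot-checked by truncating the defining integral at a large finite time. An illustrative sketch (the truncation limit and the test values of s and a are arbitrary choices):

```python
import cmath

def laplace_numeric(f, s, T=80.0, N=200000):
    # finite midpoint-rule approximation of integral_0^inf f(t) e^{-st} dt
    dt = T / N
    return sum(f((k + 0.5) * dt) * cmath.exp(-s * (k + 0.5) * dt)
               for k in range(N)) * dt

a = 0.8
step = lambda t: 1.0
delayed = lambda t: 1.0 if t > a else 0.0

for s in (0.5, 1.0, 2.0 + 1.0j):
    assert abs(laplace_numeric(step, s) - 1 / s) < 1e-3                     # Eq. 1.5.1
    assert abs(laplace_numeric(delayed, s) - cmath.exp(-a * s) / s) < 1e-3  # Eq. 1.5.2
```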
1.5.3 Example 3

The exponential decay function is shown in Fig. 1.5.3; its mathematical expression:

    f(t) = e^(−σ₁t)

is defined for t > 0, as agreed, and σ₁ is a constant.
Solution:

    F(s) = ∫_0^∞ e^(−σ₁t) e^(−st) dt = [−e^(−(σ₁+s)t)/(σ₁+s)]  (from t = 0 to t → ∞)  = 1/(σ₁ + s)                    (1.5.3)

Later, we shall often meet this and the following function, and also their product.

[Figure: a) the exponential decay e^(−σ₁t); b) the sine wave sin(2πt/T), with ω₁ = 2π/T (Fig. 1.5.3 and 1.5.4)]
1.5.4 Example 4

We have a sinusoidal function as in Fig. 1.5.4; its corresponding mathematical expression is:

    f(t) = sin ω₁t                    (1.5.4)

where the constant ω₁ = 2π/T.
Solution: by Euler's formula:

    sin ω₁t = (e^(jω₁t) − e^(−jω₁t)) / 2j                    (1.5.5)

so its ℒ transform is:

    F(s) = (1/2j) [ ∫_0^∞ e^(jω₁t) e^(−st) dt − ∫_0^∞ e^(−jω₁t) e^(−st) dt ]                    (1.5.6)
The solution of this integral is, in a way, similar to that in the previous example:

    F(s) = (1/2j) [ 1/(s − jω₁) − 1/(s + jω₁) ] = ω₁ / (s² + ω₁²)                    (1.5.7)

1.5.5 Example 5

Similarly, for the cosine function f(t) = cos ω₁t we use Euler's formula again:

    cos ω₁t = (e^(jω₁t) + e^(−jω₁t)) / 2                    (1.5.8)

Thus we obtain:

    F(s) = (1/2) [ ∫_0^∞ e^(jω₁t) e^(−st) dt + ∫_0^∞ e^(−jω₁t) e^(−st) dt ]
         = (1/2) [ 1/(s − jω₁) + 1/(s + jω₁) ] = s / (s² + ω₁²)                    (1.5.9)
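The two results can again be verified numerically. A sketch added for illustration, assuming ω₁ = 2 and arbitrary test points s:

```python
import math, cmath

def L(f, s, T=80.0, N=200000):
    # midpoint-rule approximation of the direct Laplace transform
    dt = T / N
    return sum(f((k + 0.5) * dt) * cmath.exp(-s * (k + 0.5) * dt)
               for k in range(N)) * dt

w1 = 2.0
for s in (0.7, 1.5 + 0.5j):
    assert abs(L(lambda t: math.sin(w1 * t), s) - w1 / (s**2 + w1**2)) < 1e-3  # Eq. 1.5.7
    assert abs(L(lambda t: math.cos(w1 * t), s) - s / (s**2 + w1**2)) < 1e-3   # Eq. 1.5.9
```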
[Figure: a) the cosine wave cos(2πt/T); b) the damped oscillation e^(−σ₁t) sin(2πt/T) within the envelope ±e^(−σ₁t) (Fig. 1.5.5 and 1.5.6)]

1.5.6 Example 6

In Fig. 1.5.6 we have a damped oscillation, expressed by the formula:

    f(t) = e^(−σ₁t) sin ω₁t

Solution: we again substitute the sine function, according to Euler's formula:

    F(s) = (1/2j) ∫_0^∞ e^(−σ₁t) (e^(jω₁t) − e^(−jω₁t)) e^(−st) dt
         = (1/2j) [ ∫_0^∞ e^(−(s+σ₁−jω₁)t) dt − ∫_0^∞ e^(−(s+σ₁+jω₁)t) dt ]
         = (1/2j) [ 1/(s + σ₁ − jω₁) − 1/(s + σ₁ + jω₁) ] = ω₁ / ((s + σ₁)² + ω₁²)                    (1.5.10)
[Figure: the linear ramp f(t) = t and the power function f(t) = tⁿ (Fig. 1.5.7 and 1.5.8)]

1.5.7 Example 7

A linear ramp, as shown in Fig. 1.5.7, is expressed as:

    f(t) = t

Solution: we integrate by parts according to the known relation:

    ∫ u dv = u v − ∫ v du

and we assign u = t and dv = e^(−st) dt to obtain:

    F(s) = [−t e^(−st)/s]  (from t = 0 to t → ∞)  + (1/s) ∫_0^∞ e^(−st) dt
         = [−e^(−st)/s²]  (from t = 0 to t → ∞)  = 1/s²                    (1.5.11)

1.5.8 Example 8

Fig. 1.5.8 displays a function which has the general analytical form:

    f(t) = tⁿ
Solution: again we integrate by parts, decomposing the integrand tⁿ e^(−st) with:

    u = tⁿ               du = n t^(n−1) dt
    dv = e^(−st) dt      v = −e^(−st)/s

    F(s) = ∫_0^∞ tⁿ e^(−st) dt = [−tⁿ e^(−st)/s]  (from t = 0 to t → ∞)  + (n/s) ∫_0^∞ t^(n−1) e^(−st) dt
         = (n/s) ∫_0^∞ t^(n−1) e^(−st) dt                    (1.5.12)

Repeating the integration by parts on the remaining integral:

    F(s) = (n/s) [−t^(n−1) e^(−st)/s]  (from t = 0 to t → ∞)  + (n(n−1)/s²) ∫_0^∞ t^(n−2) e^(−st) dt
         = (n(n−1)/s²) ∫_0^∞ t^(n−2) e^(−st) dt                    (1.5.13)

and so on, until the exponent of t is reduced to zero:

    F(s) = (n(n−1)(n−2)⋯3·2·1/sⁿ) ∫_0^∞ t⁰ e^(−st) dt = n!/s^(n+1)                    (1.5.14)
1.5.9 Example 9

The function shown in Fig. 1.5.9 corresponds to the expression:

    f(t) = t e^(−σ₁t)

Solution: by integrating by parts we obtain:

    F(s) = ∫_0^∞ t e^(−σ₁t) e^(−st) dt = 1/(σ₁ + s)²                    (1.5.15)

1.5.10 Example 10

Similarly to Example 9, except that here we have tⁿ, as in Fig. 1.5.10:

    f(t) = tⁿ e^(−σ₁t)

Solution: we apply the procedure from Example 8 and Example 9:

    F(s) = ∫_0^∞ tⁿ e^(−σ₁t) e^(−st) dt = n!/(σ₁ + s)^(n+1)                    (1.5.16)
[Figure: the functions t e^(−σ₁t) and tⁿ e^(−σ₁t) (Fig. 1.5.9 and 1.5.10)]
These ten examples, which we frequently meet in practice, demonstrate that the calculation of an ℒ transform is not difficult. Since the results derived are used often, we have collected them in Table 1.5.1.

Table 1.5.1: Ten frequently met ℒ transform examples

    No.   f(t)                  F(s)
    1     h(t)                  1/s
    2     h(t − a)              e^(−as)/s
    3     e^(−σ₁t)              1/(σ₁ + s)
    4     sin ω₁t               ω₁/(s² + ω₁²)
    5     cos ω₁t               s/(s² + ω₁²)
    6     e^(−σ₁t) sin ω₁t      ω₁/((s + σ₁)² + ω₁²)
    7     t                     1/s²
    8     tⁿ                    n!/s^(n+1)
    9     t e^(−σ₁t)            1/(σ₁ + s)²
    10    tⁿ e^(−σ₁t)           n!/(σ₁ + s)^(n+1)
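All ten table entries can be verified at one stroke against a numerical evaluation of the defining integral. A sketch added for illustration; the constants σ₁, ω₁, a, n and the test point s are arbitrary choices:

```python
import math, cmath

s1, w1, a, n = 1.0, 2.0, 0.5, 3
s = 1.2 + 0.8j   # test point with positive real part

def L(f, T=50.0, N=100000):
    # midpoint-rule approximation of integral_0^inf f(t) e^{-st} dt
    dt = T / N
    return sum(f((k + 0.5) * dt) * cmath.exp(-s * (k + 0.5) * dt)
               for k in range(N)) * dt

pairs = [
    (lambda t: 1.0,                                1 / s),
    (lambda t: 1.0 if t > a else 0.0,              cmath.exp(-a * s) / s),
    (lambda t: math.exp(-s1 * t),                  1 / (s + s1)),
    (lambda t: math.sin(w1 * t),                   w1 / (s**2 + w1**2)),
    (lambda t: math.cos(w1 * t),                   s / (s**2 + w1**2)),
    (lambda t: math.exp(-s1*t) * math.sin(w1*t),   w1 / ((s + s1)**2 + w1**2)),
    (lambda t: t,                                  1 / s**2),
    (lambda t: t**n,                               math.factorial(n) / s**(n + 1)),
    (lambda t: t * math.exp(-s1 * t),              1 / (s + s1)**2),
    (lambda t: t**n * math.exp(-s1 * t),           math.factorial(n) / (s + s1)**(n + 1)),
]
for f, F in pairs:
    assert abs(L(f) - F) < 2e-3
```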
…                    (1.6.1 – 1.6.4)

The ℒ transform of the derivative of a function f(t), whose transform is F(s), is:

    ℒ{df(t)/dt} = s F(s) − f(0)                    (1.6.5)

To prove this, we start from the definition of the transform:

    F(s) = ∫_0^∞ f(t) e^(−st) dt                    (1.6.6)
We will integrate by parts, making f(t) = u and e^(−st) dt = dv. The result is:

    ∫_0^∞ f(t) e^(−st) dt = [−f(t) e^(−st)/s]  (from t = 0 to t → ∞)  + (1/s) ∫_0^∞ (df(t)/dt) e^(−st) dt

                          = f(0)/s + (1/s) ∫_0^∞ (df(t)/dt) e^(−st) dt                    (1.6.7)

Multiplying by s and rearranging gives Eq. 1.6.5:

    ℒ{df(t)/dt} = ∫_0^∞ (df(t)/dt) e^(−st) dt = s F(s) − f(0)                    (1.6.8)
Example: for f(t) = e^(−σ₁t), with F(s) = 1/(s + σ₁) and f(0) = 1:

    ℒ{d(e^(−σ₁t))/dt} = s F(s) − f(0) = s/(s + σ₁) − 1 = −σ₁/(s + σ₁)                    (1.6.9)

We may also check the result by first differentiating the function e^(−σ₁t):

    d(e^(−σ₁t))/dt = −σ₁ e^(−σ₁t)                    (1.6.10)

and, since ℒ{e^(−σ₁t)} = 1/(s + σ₁), the transform of the derivative is indeed:

    ℒ{−σ₁ e^(−σ₁t)} = −σ₁/(s + σ₁)                    (1.6.11)

…                    (1.6.12 – 1.6.15)

The ℒ transform of the integral of a function is:

    ℒ{ ∫_0^t f(τ) dτ } = F(s)/s                    (1.6.16)
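The check of Eq. 1.6.9 – 1.6.11 can also be done numerically. A sketch added for illustration, with arbitrary σ₁ and test point s:

```python
import math, cmath

s1 = 2.0
s = 1.0 + 0.5j

def L(f, T=40.0, N=100000):
    # midpoint-rule approximation of the direct Laplace transform
    dt = T / N
    return sum(f((k + 0.5) * dt) * cmath.exp(-s * (k + 0.5) * dt)
               for k in range(N)) * dt

f  = lambda t: math.exp(-s1 * t)          # f(t) = e^{-sigma1 t}, f(0) = 1
df = lambda t: -s1 * math.exp(-s1 * t)    # its derivative

F = 1 / (s + s1)
lhs = L(df)                               # transform of the derivative
rhs = s * F - 1.0                         # s F(s) - f(0), Eq. 1.6.5
assert abs(lhs - rhs) < 1e-3
assert abs(rhs - (-s1 / (s + s1))) < 1e-12   # Eq. 1.6.9
```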
We will derive the proof from the basic definition of the ℒ transform:

    ℒ{ ∫_0^t f(τ) dτ } = ∫_0^∞ [ ∫_0^t f(τ) dτ ] e^(−st) dt                    (1.6.17)

We integrate by parts, with:

    u = ∫_0^t f(τ) dτ      du = f(t) dt
    dv = e^(−st) dt        v = −e^(−st)/s                    (1.6.18)

so that:

    ∫_0^∞ [ ∫_0^t f(τ) dτ ] e^(−st) dt = [ −(e^(−st)/s) ∫_0^t f(τ) dτ ]  (from t = 0 to t → ∞)  + (1/s) ∫_0^∞ e^(−st) f(t) dt                    (1.6.19)

The term between the limits is zero for t = 0, because ∫_0^0 = 0, and for t → ∞ as well, because the exponential function e^(−∞) → 0. Thus only the last integral remains, from which we can factor out the term 1/s. The result is:

    ℒ{ ∫_0^t f(τ) dτ } = (1/s) ∫_0^∞ f(t) e^(−st) dt = F(s)/s                    (1.6.20)
Example: take the damped oscillation:

    f(t) = e^(−σ₁t) sin ω₁t                    (1.6.21)

We have already calculated the transform of this function (Eq. 1.5.10) and it is:

    F(s) = ω₁ / ((s + σ₁)² + ω₁²)

Let us now calculate the transform of the integral of this function according to Eq. 1.6.20, introducing a dummy variable τ:

    ℒ{ ∫_0^t f(τ) dτ } = F(s)/s = ω₁ / ( s [(s + σ₁)² + ω₁²] )                    (1.6.22)
The same procedure can be applied repeatedly:

    single integral:   ℒ{ ∫_0^t f(τ) dτ } = F(s)/s

    double integral:   ℒ{ ∫_0^t ∫_0^(τ₁) f(τ) dτ dτ₁ } = F(s)/s²

    triple integral:   ℒ{ ∫_0^t ∫_0^(τ₁) ∫_0^(τ₂) f(τ) dτ dτ₂ dτ₁ } = F(s)/s³

    n-th integral:     ℒ{ ∫_0^t ⋯ ∫_0^(τ_{n−1}) f(τ) dτ ⋯ } = F(s)/sⁿ                    (1.6.23)

The ℒ transform of the integral of the function f(t) gives the complex function F(s)/s. The function F(s) must be divided by s as many times as we integrate.
Here again we see a great advantage of the ℒ transform, for we can replace the integration in the time domain (often a rather demanding procedure) by a simple division by s in the (complex) frequency domain.
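Eq. 1.6.20 and 1.6.22 can be verified numerically by building the running integral of the damped sine and transforming it. A sketch added for illustration, with arbitrary constants:

```python
import math, cmath

s1, w1 = 1.0, 3.0
s = 1.0 + 0.7j
T, N = 40.0, 100000
dt = T / N

f = lambda t: math.exp(-s1 * t) * math.sin(w1 * t)

# running integral g(t) = int_0^t f(tau) dtau (trapezoidal), and its
# Laplace transform, accumulated in the same pass
g = 0.0
acc = 0.0
prev = f(0.0)
for k in range(1, N + 1):
    cur = f(k * dt)
    g += 0.5 * (prev + cur) * dt          # g(t_k)
    acc += g * cmath.exp(-s * k * dt) * dt
    prev = cur

F = w1 / ((s + s1)**2 + w1**2)            # Eq. 1.5.10
assert abs(acc - F / s) < 1e-3            # Eq. 1.6.22: L{integral} = F(s)/s
```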
1.6.5 Change of Scale

We have the function f(at), with the time scale multiplied by a constant a:

    ℒ{f(at)} = ∫_0^∞ f(at) e^(−st) dt                    (1.6.24)

We introduce a new variable v = at, for which dv = a dt and also t = v/a. Thus we obtain:

    ℒ{f(at)} = (1/a) ∫_0^∞ f(v) e^(−(s/a)v) dv = (1/a) F(s/a)                    (1.6.25)

Example: take the function:

    f(t) = t e^(−3t)                    (1.6.26)

We have already calculated the ℒ transform of a similar function in Eq. 1.5.15. For the function above the result is:

    F(s) = 1/(s + 3)²                    (1.6.27)
Now let us change the scale tenfold. The new function is:

    g(t) = f(10t) = 10t e^(−30t)                    (1.6.28)

According to Eq. 1.6.25, with a = 10:

    ℒ{g(t)} = (1/10) F(s/10) = (1/10) · 1/(s/10 + 3)² = 10/(s + 30)²                    (1.6.29)
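The tenfold scale change of Eq. 1.6.28 – 1.6.29 is easy to confirm numerically. A sketch added for illustration (the test point s is arbitrary):

```python
import math, cmath

def L(f, s, T=40.0, N=100000):
    # midpoint-rule approximation of the direct Laplace transform
    dt = T / N
    return sum(f((k + 0.5) * dt) * cmath.exp(-s * (k + 0.5) * dt)
               for k in range(N)) * dt

f = lambda t: t * math.exp(-3 * t)        # F(s) = 1/(s+3)^2, Eq. 1.6.27
g = lambda t: f(10 * t)                   # time scale compressed tenfold

s = 2.0 + 1.0j
F = lambda s: 1 / (s + 3)**2
# Eq. 1.6.25: L{f(at)} = (1/a) F(s/a), here a = 10
assert abs(L(g, s) - F(s / 10) / 10) < 1e-4
assert abs(F(s / 10) / 10 - 10 / (s + 30)**2) < 1e-12   # Eq. 1.6.29
```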
1.6.6 Impulse Function

The Dirac² impulse function δ(t) is zero everywhere except at t = 0:

    δ(t) = 0  when t ≠ 0;        δ(t) → ∞  when t = 0                    (1.6.30)

while its time integral remains equal to 1.

[Figure: rectangular pulses of width ε and height 1/ε, with equal areas A₁ = A₂ = A₄ = 1, narrowing towards the impulse]
Fig. 1.6.1: The Dirac function as the limiting case of narrowing the pulse width, while keeping the time integral constant: a) If the pulse length is decreased, its amplitude must increase accordingly. b) When the pulse length ε → 0, the amplitude 1/ε → ∞.

The ℒ transform of a rectangular pulse of width ε and height 1/ε is:

    ℒ = (1/ε) ∫_0^ε e^(−st) dt + ∫_ε^∞ 0 · e^(−st) dt = (1 − e^(−sε)) / (sε)                    (1.6.31)
² Paul Dirac, 1902–1984, English physicist, Nobel Prize winner in 1933 (together with Erwin Schrödinger).
Now we express the function e^(−sε) in this result by the series:

    e^(−sε) = 1 − sε + (sε)²/2! − (sε)³/3! + ⋯

and by letting ε → 0 we obtain:

    ℒ{δ(t)} = lim_{ε→0} (1 − e^(−sε))/(sε) = lim_{ε→0} (sε − (sε)²/2! + ⋯)/(sε) = 1                    (1.6.32)

Therefore the magnitude of the spectrum envelope of this function is one and it is independent of frequency. This means that the Dirac impulse δ(t) contains all frequency components, the amplitude of each component being A = 1.
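Eq. 1.6.31 – 1.6.32 can be illustrated by evaluating the exact pulse transform for narrower and narrower pulses. A sketch added for illustration, with an arbitrary test point s:

```python
import cmath

def pulse_transform(eps, s):
    # exact transform of the rectangular pulse of width eps, height 1/eps
    # (Eq. 1.6.31): (1 - e^{-s eps}) / (s eps)
    return (1 - cmath.exp(-s * eps)) / (s * eps)

s = 2.0 + 3.0j
# as the pulse narrows, its transform approaches 1 at any fixed s:
errs = [abs(pulse_transform(eps, s) - 1) for eps in (1.0, 0.1, 0.01)]
assert errs[0] > errs[1] > errs[2]
assert abs(pulse_transform(1e-6, s) - 1) < 1e-5
```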
1.6.7 Initial and Final Value Theorems

The initial value theorem is expressed as:

    lim_{t→0⁺} f(t) = lim_{s→∞} s F(s)                    (1.6.33)

We have written the notation t → 0⁺ in order to emphasize that t approaches zero from the right side of the coordinate system. From real differentiation we know that:

    ℒ{f′(t)} = ∫_0^∞ f′(t) e^(−st) dt = s F(s) − f(0)                    (1.6.34)

When s → ∞ the factor e^(−st) forces the integral to zero:

    lim_{s→∞} ∫_0^∞ f′(t) e^(−st) dt = 0                    (1.6.35)

If we assume that f(t) is continuous at t = 0, we may write for the limit of the right hand side of Eq. 1.6.34:

    0 = lim_{s→∞} [ s F(s) − f(0) ]                    (1.6.36)

and, since f(0) = lim_{t→0⁺} f(t) for a continuous function:

    lim_{t→0⁺} f(t) = lim_{s→∞} s F(s)                    (1.6.37)
Even if f(t) is not continuous at t = 0, this relation is still valid, although the proof is slightly more difficult [Ref. 1.10]. The expression 0⁺ is introduced because we are dealing with a unilateral transform, in which it is assumed that f(t) = 0 for t < 0, so to calculate the actual initial value we must approach it from the positive side of the time axis.
For the functions which we will discuss in the rest of the book we can, in a similar way, prove the final value theorem, which is stated as:

    lim_{t→∞} f(t) = lim_{s→0} s F(s)                    (1.6.38)

(note that for some functions, such as sin ω₁t or cos ω₁t or the square wave, this limit does not exist, since the value of the function keeps oscillating in time!).
In this case we take the limit s → 0 of Eq. 1.6.34:

    lim_{s→0} ∫_{0⁻}^∞ f′(t) e^(−st) dt = ∫_{0⁻}^∞ f′(t) dt                    (1.6.39)

and the integral of the derivative over all time is simply:

    ∫_{0⁻}^∞ f′(t) dt = lim_{t→∞} [ f(t) − f(0⁻) ] = lim_{t→∞} f(t) − f(0⁻)                    (1.6.40)

Although the lower limit of the integral is a (simple) zero, we have nevertheless written 0⁻ in the result, to emphasize the unilateral transform. The limit of the right hand side of Eq. 1.6.34, when s → 0, is:

    lim_{s→0} [ s F(s) − f(0⁻) ]                    (1.6.41)

By comparing the results of Eq. 1.6.34, 1.6.39 and 1.6.41 we may write:

    lim_{t→∞} f(t) − f(0⁻) = lim_{s→0} s F(s) − f(0⁻)
    lim_{t→∞} f(t) = lim_{s→0} s F(s)                    (1.6.42)
Eq. 1.6.37 and 1.6.38 are extremely useful for checking the results of complicated calculations by the direct or the inverse Laplace transform, as we will encounter in the following parts of the book. Should the check by these two equations fail, then we have obviously made a mistake somewhere.
However, this is a necessary, but not a sufficient, condition: a passed check does not guarantee that other, sneakier mistakes do not exist; these may become obvious only when we plot the resulting function.
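Both theorems can be exercised on a simple example. A sketch added for illustration, using f(t) = 1 − e^(−t), whose transform F(s) = 1/s − 1/(s + 1) = 1/(s(s + 1)) follows from Table 1.5.1; the limits are approximated by very large and very small real s:

```python
# f(t) = 1 - e^{-t}:  F(s) = 1/(s (s + 1)), so s F(s) = 1/(s + 1)
F = lambda s: 1 / (s * (s + 1))

# initial value theorem: lim_{s->inf} s F(s) = f(0+) = 0
assert abs(1e6 * F(1e6)) < 1e-5
# final value theorem: lim_{s->0} s F(s) = f(inf) = 1
assert abs(1e-6 * F(1e-6) - 1) < 1e-5
```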
1.6.8 Convolution

We need a process by which we can calculate the response of two systems connected so that the output of the first one is the input of the second one, when their individual responses are known. We have two functions [Ref. 1.19]:

    f(t) = ℒ⁻¹{F(s)}        and        g(t) = ℒ⁻¹{G(s)}                    (1.6.43, 1.6.44)

The product of their transforms is:

    F(s) G(s) = ∫_0^∞ f(τ) e^(−sτ) dτ · ∫_0^∞ g(v) e^(−sv) dv                    (1.6.45)
In order to distinguish better between f(t) and g(t), we assign the letter u for the argument of f and v for the argument of g; thus, f(t) → f(u) and g(t) → g(v). Since both variables are now well separated we may write the above integral also in the form:

    F(s) G(s) = ∫_0^∞ [ ∫_0^∞ f(u) g(v) e^(−s(u+v)) dv ] du                    (1.6.46)

Let us integrate the expression inside the brackets with respect to the variable v. To do so we introduce a new variable τ:

    τ = u + v        so        v = τ − u        and        dv = dτ                    (1.6.47)

We consider the variable u in the inner integral to be a (variable) parameter. From the above expressions it follows that v = 0 if τ = u. By considering all this we may transform Eq. 1.6.46 into:

    F(s) G(s) = ∫_0^∞ [ ∫ f(u) g(τ − u) e^(−s(u + τ − u)) dτ ] du                    (1.6.48)

We may also change the sequence of integration. Thus we may choose a fixed t₁ and first integrate from τ = 0 to τ = t₁; in the second integration we integrate from u = 0 to u = ∞. Then the above expression obtains the form:

    F(s) G(s) = ∫_0^∞ [ ∫_0^(t₁) f(u) g(τ − u) du ] e^(−sτ) dτ                    (1.6.49)

Now we can return from u back to the usual time variable t:

    F(s) G(s) = ∫_0^∞ [ ∫_0^(t₁) f(t) g(τ − t) dt ] e^(−sτ) dτ                    (1.6.50)

The expression inside the brackets is the function y(t) which we are looking for, whilst the outer integral is the usual Laplace transform. Thus we define the convolution process, denoted by g(t) ∗ f(t), as:

    y(t) = g(t) ∗ f(t) = ∫_0^(t₁) f(t) g(τ − t) dt                    (1.6.51)

where g(τ − t) = 0 for t > τ.
P. Stari, E.Margan
This means that the function is folded in time around the ordinate, from the right to the left side of the coordinate system. At the end of this part, after we have mastered network analysis in Laplace space, we will work through an example (Fig. 1.15.1) in which this folding and the convolution process will be shown explicitly, step by step.
In general we convolve whichever of the two functions is simpler. We may do so because the convolution is commutative:

$$g(t)*f(t)=f(t)*g(t)$$  (1.6.52)
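The commutativity is easy to see in the discrete analogue of Eq. 1.6.51. The following sketch (illustrative only; the two sample sequences are arbitrary choices, not from the book) convolves two short sampled sequences both ways:

```python
def convolve(f, g):
    """Discrete convolution: h[n] = sum_k f[k] * g[n - k]."""
    n = len(f) + len(g) - 1
    h = [0.0] * n
    for i, fi in enumerate(f):
        for k, gk in enumerate(g):
            h[i + k] += fi * gk
    return h

f = [1.0, 0.5, 0.25]       # e.g. a sampled decaying response
g = [0.0, 1.0, 1.0, 1.0]   # e.g. a delayed step

print(convolve(f, g))
print(convolve(g, f))      # identical: convolution is commutative
```

Either ordering produces the same output sequence, so in practice one folds whichever function makes the bookkeeping simpler, just as stated above.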
The main properties of the Laplace transform are listed in Table 1.6.1.

Table 1.6.1

  Property               f(t)                     F(s)
  --------------------   ----------------------   ---------------------------
  Real Differentiation   df(t)/dt                 s F(s) - f(0)
  Real Integration       ∫₀^t f(t) dt             F(s)/s
  Time-Scale Change      f(at)                    (1/a) F(s/a)
  Impulse Function       δ(t)                     1
  Initial Value          lim_{t→0} f(t)           lim_{s→∞} s F(s)
  Final Value            lim_{t→∞} f(t)           lim_{s→0} s F(s)
  Convolution            f(t) * g(t)              F(s) G(s)
$$v(t)=L\,\frac{di}{dt}$$  (1.7.1)

By assuming time $t\ge 0$ and $i(0)=0$, then, according to Eq. 1.6.5, the $\mathcal{L}$ transform of the above equation is the voltage across the inductance in the $s$ domain:

$$V(s)=s\,L\,I(s)$$  (1.7.2)

and the corresponding impedance is:

$$Z(s)=\frac{V(s)}{I(s)}=s\,L$$  (1.7.3)

Here $s=\sigma+j\omega$ and thus it is complex; it can lie anywhere in the $s$ plane. In the special case when $\sigma=0$, and considering only the positive $j\omega$ axis, $s$ degenerates into $j\omega$. Then the inductive reactance becomes the familiar $j\omega L$, as is known from the usual phasor analysis of networks.
[Fig. 1.7.1: the basic elements driven by a current $i(t)$: a) inductance $L$; b) capacitance $C$; c) resistance $R$]
1.7.2 Capacitance
From basic electrical engineering we also know that the instantaneous voltage $v(t)$ across a capacitance through which a current $i(t)$ flows during a time $t\ge 0$ is:

$$v(t)=\frac{q(t)}{C}=\frac{1}{C}\int_0^{t}i\,dt$$  (1.7.4)

as shown in Fig. 1.7.1b. Here $q(t)$ is the instantaneous charge on the capacitor $C$. By applying Eq. 1.6.20 we may calculate the voltage on the capacitor in the $s$ domain:

$$V(s)=\frac{1}{sC}\,I(s)$$  (1.7.5)

and the corresponding impedance is:

$$Z(s)=\frac{V(s)}{I(s)}=\frac{1}{sC}$$  (1.7.6)

1.7.3 Resistance

For a resistance the instantaneous voltage is simply:

$$v(t)=R\,i(t)$$  (1.7.7)
and, as there are no time derivatives, the same holds in the $s$ domain, with the corresponding values $V(s)$ and $I(s)$:

$$V(s)=R\,I(s)$$  (1.7.8)

$$Z(s)=\frac{V(s)}{I(s)}=R$$  (1.7.9)
Connecting $R$ and $C$ in parallel and combining both impedances yields:

$$Z(s)=\frac{R\,\dfrac{1}{sC}}{R+\dfrac{1}{sC}}=\frac{1}{C}\cdot\frac{1}{s+\dfrac{1}{RC}}=R\cdot\frac{\dfrac{1}{RC}}{s+\dfrac{1}{RC}}=R\,G_1(s)$$  (1.7.10)

where the (real) pole is at $s_1=\sigma_1=-1/RC$ and $G_1(s)$ represents the frequency dependence.

The pole of a function is that particular value of the argument for which the function's denominator is equal to zero and, consequently, the function value goes to infinity.
Now let us apply a current step, $i(t)=1\,\mathrm{V}/R$, to our network; expressed in the $s$ domain it is $I(s)=1/sR$, according to Eq. 1.5.1. We introduced the factor $1/R$ in order to get a voltage of $1\,\mathrm{V}$ on our $RC$ combination when $t\to\infty$. The corresponding function is then:

$$F(s)=V(s)=I(s)\,Z(s)=\frac{1}{sR}\,R\,G_1(s)=\frac{1}{s}\cdot\frac{\dfrac{1}{RC}}{s+\dfrac{1}{RC}}$$  (1.7.11)
$$\mathcal{L}\left\{e^{\sigma_1 t}\right\}=\frac{1}{s-\sigma_1}$$  (1.7.12)

or inversely:

$$\mathcal{L}^{-1}\left\{\frac{1}{s-\sigma_1}\right\}=e^{\sigma_1 t}$$  (1.7.13)

By comparing Eq. 1.7.11 with Eq. 1.7.13 we see that $\sigma_1=-1/RC$:

$$\mathcal{L}^{-1}\left\{\frac{1}{s+1/RC}\right\}=e^{-t/RC}$$  (1.7.14)
From Eq. 1.6.20 we concluded that division by $s$ in the $s$ domain corresponds to real integration in the $t$ domain. By considering this together with Eq. 1.5.1, we obtain:

$$\mathcal{L}^{-1}\left\{\frac{1}{RC}\cdot\frac{1}{s\,(s+1/RC)}\right\}
=\frac{1}{RC}\int_0^{t}e^{-t/RC}\,dt
=\frac{1}{RC}\,RC\,\Big(-e^{-t/RC}\Big)\bigg|_0^{t}
=-e^{-t/RC}-(-1)=1-e^{-t/RC}$$  (1.7.15)
[Fig. 1.7.2: a) $g_1(t)=e^{-t/RC}$, with the pole $s_1=-1/RC$; b) $h(t)=0$ for $t<0$ and $1\,\mathrm{V}/R$ for $t>0$, with the pole $s_0=0$; c) $f(t)=1-e^{-t/RC}$, with both poles $s_0$ and $s_1$]

Fig. 1.7.2: The course of mathematical operations for a parallel $RC$ network excited by a unit step current $i_i$. The $t$ domain functions are on the left, the $s$ domain functions are on the right. a) The self-discharge network function is equal to the impulse function $g_1(t)$. b) The unit step in the $t$ domain, $h(t)$, is represented by a pole at the origin ($s_0=0$) in the $s$ domain. The function $g_2(t)=-e^{-t/RC}$ is the reaction of the network to the unit step excitation. c) The output voltage is the sum of both functions, $v_o=f(t)=h(t)+g_2(t)$.
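The closed form $f(t)=1-e^{-t/RC}$ can be cross-checked by integrating the circuit equation $C\,dv/dt=i-v/R$ directly in the time domain. The sketch below is illustrative only; the component values are arbitrary assumptions, not taken from the book:

```python
import math

R, C = 1000.0, 1e-6        # assumed values: 1 kOhm, 1 uF -> RC = 1 ms
i_step = 1.0 / R           # step current giving 1 V across R as t -> infinity

dt = 1e-7                  # simple forward-Euler time step
v = 0.0
t = 0.0
while t < 5 * R * C:       # simulate five time constants
    dv = (i_step - v / R) / C * dt   # C dv/dt = i - v/R
    v += dv
    t += dt

v_exact = 1.0 - math.exp(-t / (R * C))
print(v, v_exact)          # numerical and analytic values agree closely
```

After five time constants both the simulated and the analytic voltage have risen to within about 0.7% of the final 1 V, as Eq. 1.7.15 predicts.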
From this simple example we obtain the idea of how to use tables of $\mathcal{L}$ transforms to obtain the response in the $t$ domain, which would otherwise have to be calculated by solving differential equations. In addition to this we may state a very important conclusion for the $s$ domain:
the output function is the product of the input function and the network function:

$$\underbrace{F(s)}_{\substack{\text{output}\\ \text{function}}}
=\underbrace{\frac{1}{sR}}_{\substack{\text{input}\\ \text{function}}}\cdot
\underbrace{R\,\frac{\dfrac{1}{RC}}{s+\dfrac{1}{RC}}}_{\substack{\text{network function}\\ \text{(also named impulse response)}}}$$

To return to the $t$ domain we would formally use the inverse transform:

$$f(t)=\mathcal{L}^{-1}\{F(s)\}=\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)\,e^{st}\,ds$$

but it would not be fair to leave the reader to grind through this integral of $F(s)$ with the best of his or her knowledge. In essence the above expression is a contour integral.
Knowledge of contour integration is a necessary prerequisite for calculating the inverse Laplace transform. We will discuss this in the following section. After studying it the reader will realize that the calculation of the step response in the $t$ domain by contour integration, although a little more difficult than in the above example of the simple $RC$ circuit, is still a relatively simple procedure.
[Fig. 1.8.1: the area $A$ under the curve $y=1/x$, between the limits $x_1$ and $x_2$]

$$A=\int_{x_1}^{x_2}y\,dx=\int_{x_1}^{x_2}\frac{1}{x}\,dx=\ln x_2-\ln x_1=\ln\frac{x_2}{x_1}$$  (1.8.1)
The area above the $x$ axis is counted as positive and the area below the $x$ axis (if any) as negative. The area $A$ in Fig. 1.8.1 represents the difference of the integral values at the upper limit, $F(x_2)$, and at the lower limit, $F(x_1)$. As shown in Fig. 1.8.1, the integration path runs from $x_1$ along the $x$ axis up to $x_2$.
For a comparison let us now calculate a similar integral, but with a complex variable $z=x+jy$:

$$W=\int_{z_1}^{z_2}\frac{1}{z}\,dz=F(z_2)-F(z_1)=\ln z_2-\ln z_1=\ln\frac{z_2}{z_1}$$  (1.8.2)
So far we cannot see any difference between Eq. 1.8.1 and 1.8.2 (a close investigation of the result would show that it may be multi-valued in case the path from $z_1$ to $z_2$ circles the pole one or more times; but we will not discuss such cases). The whole integration procedure is the same in both cases. The difference in the result of the second equation becomes apparent when we express the complex variable $z$ in the exponential form:

$$z_1=|z_1|\,e^{j\theta_1}\qquad\text{and}\qquad z_2=|z_2|\,e^{j\theta_2}$$  (1.8.3)
then:

$$\ln\frac{z_2}{z_1}=\ln\frac{|z_2|\,e^{j\theta_2}}{|z_1|\,e^{j\theta_1}}=\ln\frac{|z_2|}{|z_1|}+\ln e^{j(\theta_2-\theta_1)}=\ln\frac{|z_2|}{|z_1|}+j\,(\theta_2-\theta_1)=u+j\,v$$  (1.8.4)

where:

$$u=\ln\frac{|z_2|}{|z_1|}\qquad\text{and}\qquad v=\theta_2-\theta_1$$  (1.8.5)

and also:

$$\theta_i=\arctan\frac{y_i}{x_i}$$  (1.8.6)
Obviously the result of Eq. 1.8.2, as shown in Eq. 1.8.4, is complex. It cannot be plotted as simply as the integral of Fig. 1.8.1, since for displaying a complex function of a complex argument we would need a 4D graph, whilst the present state of technology allows us to plot only a 2D projection of a 3D graph, at best.

We can, however, restrict the $z$ argument's domain, as in Fig. 1.8.2, by making its real part a constant, say, $x=c$, and then make plots of $F(c+jy)=1/(c+jy)$ for some selected value of $c$. In Fig. 1.8.2 we have chosen $c=0$ and $c=0.5$, whilst the imaginary part was varied from $-j3$ to $+j3$.

In this way we have plotted two graphs, labeled A and B. The graph A belongs to $c=0$ and lies in the $\Im\{z\}$, $\Im\{F(z)\}$ plane; it looks just like the one in Fig. 1.8.1, but changed in sign, owing to the following rationalization of the function's denominator:

$$\frac{1}{jy}=\frac{1}{jy}\cdot\frac{j}{j}=\frac{j}{j^2\,y}=-\frac{j}{y}$$
The graph B belongs to $c=0.5$ and is a 3D curve, twisting in accordance with the phase angle of the function. To aid the 3D view, the three projections of B have also been plotted.
Fig. 1.8.2: By reducing the complex domain $x+jy$ to $c+jy$, where $c$ is a constant, we can plot the complex function $F(c+jy)$ in a 3D graph. Here we have $c=0$ (graph A) and $c=0.5$ (graph B). Also shown are the three projections of B. The twisted surface is the integral of $F(c+jy)$ for $c=0.5$ and $y$ in the range $-j2\le y\le +j2$. See Appendix 1 for more details.
A few points of graph B can be checked by inserting the values into $F(c+jy)$:

a) at $c=0.5$ and $y=-3$:

$$F(c+jy)=\frac{1}{0.5-j\,3}=\frac{0.5+j\,3}{(0.5-j\,3)(0.5+j\,3)}=\frac{0.5+j\,3}{0.5^2+3^2}=\frac{0.5}{9.25}+j\,\frac{3}{9.25}=0.0541+j\,0.3243$$

b) at $c=0.5$ and $y=-0.5$:

$$F(c+jy)=\frac{1}{0.5-j\,0.5}=\frac{0.5+j\,0.5}{(0.5-j\,0.5)(0.5+j\,0.5)}=\frac{0.5+j\,0.5}{0.5^2+0.5^2}=1+j$$

(here both the real and the imaginary part are 1; this is the top point of the curve).

c) an obvious choice is $c=0.5$ and $y=0$, thus:

$$F(c+jy)=\frac{1}{0.5+j\,0}=2$$

(here the real part is 2, the imaginary part is 0 and this is the right-most point on the curve; also, it is its only real value point).
For positive imaginary values, J D is the complex conjugate of the values above.
Now that we have some idea of how $F(z)$ looks, let us return to our integral problem. If the integration path is parallel to the imaginary axis, $-j2\le y\le +j2$, and displaced by $x=\Re\{z\}=c=0.5$, the result of integration would be the surface indicated in Fig. 1.8.2. But for an arbitrary path, with $x$ not constant, we should make many such plots and then trace the integration path across the appropriate curves. The area bounded by the integration path and its trace on those curves would be the result we seek.
For a detailed treatment of complex function plotting see Appendix 1.
Returning to the result of Eq. 1.8.4 we may draw an interesting conclusion:

The complex line integral depends only on the initial value $z_1$ and the final value $z_2$, which represent the two limits of the integral. The result of the integration is independent of the actual path between these limits, providing the path stays on the same side of the pole.
All the significant differences between an integral of a real function and the line
integral of a complex function are listed in Table 1.8.1.
The $x$ axis is the argument's domain for a real integral, whilst for a complex integral it is the whole $z$ plane. Do not confuse the $z$ plane (the complex plane, $x+jy$) with the diagram's vertical axis, which here is $F(z)=F(x+jy)$. We recommend the readers to ponder over Fig. 1.8.2 and try to acquire a clear idea of the differences between both types of integral, since this is necessary for understanding the discussion which follows.
Table 1.8.1: Differences between real and complex integration

                          real variable                          complex variable
  independent variable    $x$                                    $z=x+jy$
  dependent variable      $\dfrac{1}{x}$                         $\dfrac{1}{z}=\dfrac{x}{x^2+y^2}-j\,\dfrac{y}{x^2+y^2}$
  integral                $\int_{x_1}^{x_2}\dfrac{1}{x}\,dx$     $\int_{z_1}^{z_2}\dfrac{1}{z}\,dz$
  integration path        from $x_1$ to $x_2$ along the $x$ axis   any path from $z_1$ to $z_2$ on the same side of the pole
  result                  $\ln\dfrac{x_2}{x_1}$ (real)           $\ln\dfrac{z_2}{z_1}=\ln\dfrac{|z_2|}{|z_1|}+j\,(\theta_2-\theta_1)=u+jv$ (complex)
1.8.1 Example 1

We have a function $f(z)=3z$ which we shall integrate from $-2j$ to $1+j$:

$$\int_{-2j}^{1+j}3z\,dz=\frac{3z^2}{2}\bigg|_{-2j}^{1+j}=\frac{3}{2}\Big[(1+j)^2-(-2j)^2\Big]=\frac{3}{2}\,(2j+4)=6+3j$$
1.8.2 Example 2

The integration limits are the same as in the previous example, whilst the function is different, $f(z)=1+z^2$:

$$\int_{-2j}^{1+j}(1+z^2)\,dz=\left(z+\frac{z^3}{3}\right)\bigg|_{-2j}^{1+j}
=(1+j)+\frac{(1+j)^3}{3}-\left[(-2j)+\frac{(-2j)^3}{3}\right]=\frac{1}{3}+j$$
1.8.3 Example 3

The same function as in Example 1, except that both limits are interchanged:

$$\int_{1+j}^{-2j}3z\,dz=\frac{3z^2}{2}\bigg|_{1+j}^{-2j}=\frac{3}{2}\Big[(-2j)^2-(1+j)^2\Big]=-6-3j$$
We see that although the function under the integral is complex, the same rules
apply for integration as for a function of a real variable. The last example shows us that if
the limits of the integral of a complex function are exchanged the result of the integration
changes the sign.
As already mentioned, the result of the integration of a complex function is independent of the actual path of integration between the limits $z_1$ and $z_2$ (see Fig. 1.8.3), provided that no pole lies between the extreme paths $L_1$ and $L_2$. Thus for all the paths shown the result of integration is the same. This means that the function is analytic in the area between $L_1$ and $L_2$. When at least one pole of the function lies between $L_1$ and $L_2$, the integral along the path $L_1$ is in general no longer equal to the integral along the path $L_2$. In Fig. 1.8.4 we show such a case, in which the function is non-analytic (or non-regular) inside a small area between $z_1$ and $z_2$ (in the remaining area the function is analytic).
[Fig. 1.8.3: several integration paths between $z_1$ and $z_2$, bounded by the extreme paths $L_1$ and $L_2$, in a fully analytic domain]

Fig. 1.8.4: Here the function has a non-analytic domain area between $L_1$ and $L_2$. Now the integral along the path $L_1$ is not equal to the integral along the path $L_2$.
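The path independence in the analytic case can be verified numerically. This sketch (illustrative, not from the book) integrates $f(z)=3z$ from $-2j$ to $1+j$ along a straight line and along a two-segment detour; the detour point $2+j0$ is an arbitrary assumption. Both paths reproduce the $6+3j$ of Example 1:

```python
def line_integral(f, points, steps=2000):
    """Numerically integrate f along straight segments joining the given points."""
    total = 0.0 + 0.0j
    for z_start, z_end in zip(points, points[1:]):
        dz = (z_end - z_start) / steps
        for k in range(steps):
            z_mid = z_start + (k + 0.5) * dz   # midpoint rule on each sub-step
            total += f(z_mid) * dz
    return total

f = lambda z: 3 * z
z1, z2 = -2j, 1 + 1j

direct = line_integral(f, [z1, z2])
detour = line_integral(f, [z1, 2.0 + 0j, z2])  # different path, same endpoints

print(direct, detour)   # both approximately 6+3j
```

Since $3z$ is analytic everywhere, any path between the same endpoints gives the same value; inserting a pole between the two paths would break this equality, as Fig. 1.8.4 illustrates.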
1.8.4 Example 4

Let us integrate $f(z)=1/z$ along the path $L_1$ of Fig. 1.8.5, an arc of unit radius on which $z=e^{j\theta}$ and $dz=j\,e^{j\theta}\,d\theta$, with the angle $\theta$ running from $\pi/2$ to $0$:

$$\int_{L_1}\frac{1}{z}\,dz=\int_{\pi/2}^{0}\frac{j\,e^{j\theta}}{e^{j\theta}}\,d\theta=j\,\theta\,\bigg|_{\pi/2}^{0}=-j\,\frac{\pi}{2}$$
1.8.5 Example 5

Here everything is the same as in the previous example, except that we will integrate along the path $L_2$ of Fig. 1.8.5. In this case the angle $\theta$ goes from $-3\pi/2$ to $0$:

$$\int_{L_2}\frac{1}{z}\,dz=\int_{-3\pi/2}^{0}\frac{j\,e^{j\theta}}{e^{j\theta}}\,d\theta=j\,\theta\,\bigg|_{-3\pi/2}^{0}=j\,\frac{3\pi}{2}$$
A check by the antiderivative, as in Eq. 1.8.2, agrees with the result along $L_1$:

$$\int\frac{1}{z}\,dz=\ln 1-\ln j=-\ln e^{j\pi/2}=-j\,\frac{\pi}{2}$$
Fig. 1.8.5: The integral along the path $L_1$ is not equal to the integral along the path $L_2$ because the function has a pole which lies between both paths.

[Figure: a pole at $z_0$ encircled by a closed path composed of the segments $L_a$, $L_b$, $L_c$ and $L_d$, which together form the contour $C$]
$$\oint_C\frac{dz}{z}=\int_{0}^{2\pi}\frac{j\,e^{j\theta}}{e^{j\theta}}\,d\theta=j\int_{0}^{2\pi}d\theta=2\pi j$$  (1.9.1)
The resulting integral along the circle $C$ is called the contour integral; the arrow in the symbol indicates the direction of encirclement of the pole (at $z=0$).

Now let us move the pole from the origin to the point $a=x_a+j\,y_a$. The corresponding function is then $f(z)=1/(z-a)$. The first attempt would be to integrate along the contour $C$ as shown in Fig. 1.9.1. Inside this contour the domain of the function is analytic, except for the point $a$. Unfortunately $C$ is a random contour and cannot be expressed in a convenient mathematical way. Since $a$ is the only pole inside the contour $C$, we may select another, simpler integration path. As we have already mastered the integration around a circular path, we select a circle $C_c$ with the radius $\varepsilon$ that lies inside the contour $C$. From Fig. 1.9.1 it is evident that:

$$\varepsilon=|z-a|\qquad\text{or}\qquad z-a=\varepsilon\,e^{j\theta}$$  (1.9.2)

Thus:

$$z=\varepsilon\,e^{j\theta}+a$$  (1.9.3)

where the angle $\theta$ can have any value in the range $0\dots 2\pi$. Furthermore it follows that:

$$dz=j\,\varepsilon\,e^{j\theta}\,d\theta$$  (1.9.4)

$$\oint_{C_c}\frac{dz}{z-a}=\int_{0}^{2\pi}\frac{j\,\varepsilon\,e^{j\theta}}{\varepsilon\,e^{j\theta}}\,d\theta=j\int_{0}^{2\pi}d\theta=2\pi j$$  (1.9.5)

The result is the same as we have obtained for the function $f(z)=1/z$, in which the pole was at the origin of the $z$ plane.
Fig. 1.9.1: Contour integral around the pole at $a$; the circle $C_c$ lies inside the arbitrary contour $C$.
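Eq. 1.9.5 can be checked numerically by summing $f(z)\,dz$ around a circle of small radius. In the sketch below (not from the original text) the pole position $a=-1+0.5j$ and the radius $\varepsilon=0.5$ are arbitrary assumptions:

```python
import cmath
import math

def contour_integral(f, center, radius, n=4096):
    """Sum f(z) dz counter-clockwise around the circle |z - center| = radius."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = 2 * math.pi * (k + 0.5) / n
        z = center + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2 * math.pi / n)
        total += f(z) * dz
    return total

a = -1.0 + 0.5j   # assumed pole position
result = contour_integral(lambda z: 1 / (z - a), a, 0.5)
print(result)     # approximately 2*pi*j
```

The result is $2\pi j$ regardless of where the pole sits or how small the circle is made, exactly as the derivation above shows.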
We look again at Fig. 1.8.3, where the integral along the path $L_1$ is equal to the integral along the path $L_2$ because there is no pole between $L_1$ and $L_2$. It would be interesting to make the integral from $z_1$ to $z_2$ along the path $L_2$ and then back again from $z_2$ to $z_1$ along the path $L_1$, making a closed loop (contour) integral:

$$\oint_{L_2-L_1}f(z)\,dz=\int_{z_1}^{z_2}f(z)\,dz\;\bigg|_{\text{along }L_2}+\int_{z_2}^{z_1}f(z)\,dz\;\bigg|_{\text{along }L_1}$$  (1.9.6)

Since both integrals have the same magnitude, exchanging the limits of the second integral makes it negative, and their sum is zero. This statement affords us the conclusion that the integral around the contour $C$ in Fig. 1.9.2, which encircles an area where the function is analytic, is zero (the only pole $a$ in the vicinity lies outside the contour of integration). This is expressed as:

$$\oint_C f(z)\,dz=0$$  (1.9.7)
The expressions in Eq. 1.9.6 and 1.9.8 were derived by the French mathematician Augustin Louis Cauchy (1789–1857). In all the calculations so far we have integrated in a counter-clockwise sense, having the integration field, including the pole, always on the left side. In the case of a clockwise direction, let us again take Eq. 1.9.1 and integrate clockwise, from $2\pi$ to $0$:

$$\oint_C\frac{dz}{z}=\int_{2\pi}^{0}\frac{j\,e^{j\theta}}{e^{j\theta}}\,d\theta=j\int_{2\pi}^{0}d\theta=-2\pi j$$  (1.9.8)

Note that the sign of the result changes if we change the direction of encirclement. So we may write in general:

$$\oint_{\circlearrowleft}f(z)\,dz=-\oint_{\circlearrowright}f(z)\,dz$$  (1.9.9)
Consider now a function of the form:

$$g(z)=\frac{f(z)}{z-a}$$  (1.10.1)
This function is also analytic inside the contour $C$, except at the point $a$, where it has a pole, as shown in Fig. 1.10.1. Let us take the integral around the closed contour $C$:

$$\oint_C\frac{f(z)}{z-a}\,dz$$  (1.10.2)

which is similar to the integral in Eq. 1.9.6, except that here we have $f(z)$ in the numerator. Because at the point $a$ the function under the integral is not analytic, the path of integration must avoid this point. Therefore we go around it along a circle of the radius $\varepsilon$, which can be made as small as required (but not zero).

For the path of integration we shall use the required contour $C$ and the circle $C_c$. To make the closed contour, the complete integration path will start at point 1 and go counter-clockwise around the contour $C$ to come back to the point 1; then from the point 1 to the point 2 along the dotted line; then clockwise around the circle $C_c$ back to the point 2; and finally from the point 2 back to the point 1 along the dotted line. In this way, the contour of integration is closed. The integral from the point 1 to 2 and back again is zero. Thus there remain only the integrals around the contour $C$ and around the circle $C_c$.
Fig. 1.10.1: Cauchy's method of expressing analytic functions (see text).
Since around the complete integration path the domain on the left hand side of the contour was always analytic, the resulting integral must be zero. Thus:

$$\oint_{C+C_c}\frac{f(z)}{z-a}\,dz=\oint_{C}\frac{f(z)}{z-a}\,dz+\oint_{C_c\,(\mathrm{cw})}\frac{f(z)}{z-a}\,dz=0$$  (1.10.3)
$$\oint_{C}\frac{f(z)}{z-a}\,dz=\oint_{C_c}\frac{f(z)}{z-a}\,dz$$  (1.10.4)

Here we have changed the sign of the second integral by reversing the sense of encirclement. Similarly as in Eq. 1.9.2 and 1.9.4 we write:

$$z-a=\varepsilon\,e^{j\theta}\qquad\text{and}\qquad dz=j\,\varepsilon\,e^{j\theta}\,d\theta
\qquad\Longrightarrow\qquad\frac{dz}{z-a}=j\,d\theta$$  (1.10.5)
#1
#1
0 D
(
.D 4 0 +( . ) 4( c0 D 0 +d . )
D+
G
(1.10.6)
The integration must go from ! to #1 in order to encircle the point + in the required
direction. The value of the first integral on the right is:
#1
4 0 +( . ) #14 0 +
(1.10.7)
and we will prove that the second integral is zero. Its magnitude is:
Q #1 max 0 D 0 +
(1.10.8)
lim ( c0 D 0 +d . ) !
&p!
(1.10.9)
Thus:

$$2\pi j\,f(a)=\oint_{C}\frac{f(z)}{z-a}\,dz$$  (1.10.10)

and:

$$f(a)=\frac{1}{2\pi j}\oint_{C}\frac{f(z)}{z-a}\,dz$$  (1.10.11)
With Eq. 1.10.11 we can calculate the value of an analytic function at any desired point (say, $a$) if all the values on the contour surrounding this point are known. Thus if:

$$g(z)=\frac{f(z)}{z-a}$$

then the value $f(a)$ is called the residue of the function $g(z)$ for the pole $a$.
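Eq. 1.10.11 is easy to test numerically: sample $f(z)/(z-a)$ on a circle around $a$ and sum. In this sketch (illustrative only) the function $f(z)=z^2+1$ and the point $a=0.5+0.2j$ are arbitrary assumptions:

```python
import cmath
import math

def cauchy_value(f, a, radius=1.0, n=4096):
    """f(a) = 1/(2*pi*j) * (contour integral of f(z)/(z - a) dz around a)."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = 2 * math.pi * (k + 0.5) / n
        z = a + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2 * math.pi / n)
        total += f(z) / (z - a) * dz
    return total / (2j * math.pi)

f = lambda z: z * z + 1
a = 0.5 + 0.2j
val = cauchy_value(f, a)
print(val, f(a))   # the recovered value agrees with f(a)
```

The value of the analytic function at the centre is recovered purely from its values on the surrounding contour, which is exactly the content of Eq. 1.10.11.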
To make the term residue clear let us make a practical example. Suppose $g(z)$ is a rational function of two polynomials:

$$g(z)=\frac{P(z)}{Q(z)}=\frac{z^m+b_{m-1}\,z^{m-1}+b_{m-2}\,z^{m-2}+\cdots+b_1\,z+b_0}{z^n+a_{n-1}\,z^{n-1}+a_{n-2}\,z^{n-2}+\cdots+a_1\,z+a_0}$$  (1.10.12)

where $b_i$ and $a_i$ are real constants and $n>m$. Eq. 1.10.12 represents a general form of a frequency response of an amplifier, where $z$ can be replaced by the usual $s=\sigma+j\omega$ and $b_0/a_0$ is the DC amplification (at frequency $s=0$). Instead of the sums, the polynomials $P(z)$ and $Q(z)$, and thus $g(z)$, may also be expressed in the product form:
$$g(z)=\frac{(z-z_1)(z-z_2)\cdots(z-z_m)}{(z-p_1)(z-p_2)\cdots(z-p_n)}$$  (1.10.13)
In this equation, $z_1, z_2,\dots,z_m$ are the roots of the polynomial $P(z)$, so they are also the zeros of $g(z)$. Similarly, $p_1, p_2,\dots,p_n$ are the roots of the polynomial $Q(z)$ and therefore also the poles of $g(z)$. Both statements are valid if $p_i\ne z_i$ for any $i$ that can be applied to Eq. 1.10.13 (if $z-z_1$ were equal to, say, $z-p_3$, there would be no pole at $p_3$, because this pole would be canceled by the zero $z_1$). Now we factor out the term with one pole, i.e., $1/(z-p_2)$, and write:

$$g(z)=\frac{(z-z_1)(z-z_2)\cdots(z-z_m)}{(z-p_1)(z-p_3)\cdots(z-p_n)}\cdot\frac{1}{z-p_2}=f(z)\,\frac{1}{z-p_2}$$  (1.10.14)

where:

$$f(z)=\frac{(z-z_1)(z-z_2)\cdots(z-z_m)}{(z-p_1)(z-p_3)\cdots(z-p_n)}$$  (1.10.15)

The residue of $g(z)$ at the pole $p_2$ is then simply the value $f(p_2)$:

$$\mathrm{res}_2=f(p_2)=\lim_{z\to p_2}\,(z-p_2)\,g(z)$$  (1.10.16)
The word residue is of Latin origin and means the remainder. However, since a remainder may also appear when we divide a polynomial by another, we shall keep using the expression residue in order to avoid any confusion. Also, in our further practical calculations we will simply write, say, $\mathrm{res}_2$ instead of the complete expression $\mathrm{res}_2\,F(s)$.
1.10.1 Example 1

Our function is:

$$F(s)=\frac{(s+2)(s+3)}{(s+4)(s+5)(s+6)}$$

We need to calculate the three residues of $F(s)$ for the poles at $s=-4$, $s=-5$ and $s=-6$:

$$\mathrm{res}_1=\lim_{s\to-4}\,(s+4)\,\frac{(s+2)(s+3)}{(s+4)(s+5)(s+6)}=\frac{(-4+2)(-4+3)}{(-4+5)(-4+6)}=1$$

$$\mathrm{res}_2=\lim_{s\to-5}\,(s+5)\,\frac{(s+2)(s+3)}{(s+4)(s+5)(s+6)}=\frac{(-5+2)(-5+3)}{(-5+4)(-5+6)}=-6$$

$$\mathrm{res}_3=\lim_{s\to-6}\,(s+6)\,\frac{(s+2)(s+3)}{(s+4)(s+5)(s+6)}=\frac{(-6+2)(-6+3)}{(-6+4)(-6+5)}=6$$

An interesting fact here is that since all the poles are real, all the residues are real as well; in other words, a real pole causes the residue of that pole to be real.
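Residues of simple poles are easy to check numerically by evaluating $(s-p)\,F(s)$ very close to each pole; a small sketch (not from the original text) applied to the function of Example 1:

```python
def F(s):
    return (s + 2) * (s + 3) / ((s + 4) * (s + 5) * (s + 6))

def residue_at(pole, eps=1e-7):
    """Approximate res = lim_{s -> pole} (s - pole) * F(s)."""
    s = pole + eps
    return (s - pole) * F(s)

res = [residue_at(p) for p in (-4.0, -5.0, -6.0)]
print([round(r, 4) for r in res])   # [1.0, -6.0, 6.0]
```

The numerical limits reproduce the hand-calculated residues 1, -6 and 6, and all three come out real, as expected for real poles.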
1.10.2 Example 2

Our function is:

$$F(s)=\frac{(s+2)\,e^{st}}{3s^2+9s+9}$$

Here we must consider that the variable of the function $F(s)$ is only $s$ and not $t$. First we tackle the denominator to find both roots, which are the poles of our function:

$$3s^2+9s+9=3\,(s^2+3s+3)=0$$

Thus:

$$s_{1,2}=\sigma_1\pm j\,\omega_1=-\frac{3}{2}\pm j\,\frac{\sqrt{3}}{2}$$

and the function can be written as:

$$F(s)=\frac{(s+2)\,e^{st}}{3\,(s-s_1)(s-s_2)}$$

We shall carry out a general calculation of the two residues and then introduce the numerical values for $\sigma_1$ and $\omega_1$:

$$\mathrm{res}_1=\lim_{s\to s_1}\,(s-s_1)\,\frac{(s+2)\,e^{st}}{3\,(s-s_1)(s-s_2)}=\frac{(s_1+2)\,e^{s_1 t}}{3\,(s_1-s_2)}$$

We now set $\sigma_1=-3/2$ and $\omega_1=\sqrt{3}/2$ to obtain the numerical value of the residue:

$$\mathrm{res}_1=\frac{\dfrac{1}{2}+j\,\dfrac{\sqrt{3}}{2}}{3\,j\,\sqrt{3}}\;e^{-\frac{3}{2}t}\,e^{\,j\frac{\sqrt{3}}{2}t}
=\frac{1+j\,\sqrt{3}}{j\,6\sqrt{3}}\;e^{-\frac{3}{2}t}\,e^{\,j\frac{\sqrt{3}}{2}t}$$

Similarly:

$$\mathrm{res}_2=\lim_{s\to s_2}\,(s-s_2)\,\frac{(s+2)\,e^{st}}{3\,(s-s_1)(s-s_2)}=\frac{(s_2+2)\,e^{s_2 t}}{3\,(s_2-s_1)}$$

Since both poles are complex conjugate, both residues are complex conjugate as well. In rational functions, which will appear in the later sections, all the poles will be either real, or complex conjugate, or both. Therefore the sum of all residues of these functions (that is, the time function) will always be real.
Let the function $G(s)$ have a pole of the $n$-th order at $s=a$:

$$G(s)=\frac{F(s)}{(s-a)^n}$$  (1.11.1)

To calculate the residue we first expand $F(s)$ into a Taylor series [Ref. 1.4, 1.11]:

$$F(s)=(s-a)^n\,G(s)=\frac{F(a)}{0!}+\frac{F'(a)\,(s-a)}{1!}+\frac{F''(a)\,(s-a)^2}{2!}+\cdots+\frac{F^{(n-1)}(a)\,(s-a)^{n-1}}{(n-1)!}+\cdots$$  (1.11.2)

By dividing this series by $(s-a)^n$ we obtain $G(s)$ back:

$$G(s)=\frac{F(s)}{(s-a)^n}=\frac{F(a)}{(s-a)^n}+\frac{F'(a)}{(s-a)^{n-1}}+\frac{F''(a)}{2!\,(s-a)^{n-2}}+\cdots+\frac{F^{(n-1)}(a)}{(n-1)!\,(s-a)}+\cdots$$  (1.11.3)

With the derivatives of $F$ at $a$ written as constants $A_k$, this is:

$$G(s)=\frac{A_{-n}}{(s-a)^n}+\frac{A_{-n+1}}{(s-a)^{n-1}}+\frac{A_{-n+2}}{(s-a)^{n-2}}+\cdots+\frac{A_{-1}}{s-a}+A_0+A_1\,(s-a)+A_2\,(s-a)^2+\cdots$$  (1.11.4)

The sum of all the fractions of the above function we call the principal part and the rest is the analytic part (also known as the regular part).

Eq. 1.11.4 is named the Laurent series, after the French mathematician Pierre-Alphonse Laurent (1813–1854), who in 1843 described a series with negative powers. A general expression for the Laurent series is:

$$G(s)=\sum_{n=-m}^{+\infty}A_n\,(s-a)^n$$  (1.11.5)
Let us integrate the series term by term around a circular contour $C$ centred at $a$:

$$\oint_C G(s)\,ds=\oint_C\sum_{n=-m}^{+\infty}A_n\,(s-a)^n\,ds$$  (1.11.6)

On the circle we have, as before:

$$s-a=\varepsilon\,e^{j\theta}\qquad\text{and}\qquad ds=j\,\varepsilon\,e^{j\theta}\,d\theta$$  (1.11.7)

so that:

$$(s-a)^n=\varepsilon^n\,e^{j\,n\theta}$$  (1.11.8)

For every $n\ne-1$ the contour integral of a single term vanishes:

$$\oint_C(s-a)^n\,ds=j\,\varepsilon^{n+1}\int_0^{2\pi}e^{j(n+1)\theta}\,d\theta=\frac{\varepsilon^{n+1}}{n+1}\left[e^{j(n+1)2\pi}-1\right]=0$$  (1.11.9)

because $e^{j\,k\,2\pi}=1$ for any positive or negative integer $k$, including $0$. For $n=-1$, we derive from Eq. 1.11.8:

$$\oint_C\frac{ds}{s-a}=\int_0^{2\pi}\frac{j\,\varepsilon\,e^{j\theta}}{\varepsilon\,e^{j\theta}}\,d\theta=2\pi j$$  (1.11.10)

In order that the result corresponds to the Laurent series we must add the constant factor of the term with $n=-1$, and this is $A_{-1}$. Eq. 1.11.8 to 1.11.10 prove that the contour integration of the complete Laurent series $G(s)$ yields only:

$$\oint_C G(s)\,ds=A_{-1}\,2\pi j$$  (1.11.11)

Thus from the whole series (Eq. 1.11.4) only the part with $A_{-1}$ remained after the integration. If we return to Eq. 1.11.3 we conclude that:

$$A_{-1}=\frac{1}{2\pi j}\oint_C G(s)\,ds=\mathrm{res}\,\frac{F(s)}{(s-a)^n}=\frac{F^{(n-1)}(a)}{(n-1)!}=\lim_{s\to a}\,\frac{1}{(n-1)!}\,\frac{d^{\,n-1}}{ds^{\,n-1}}\Big[(s-a)^n\,G(s)\Big]$$  (1.11.12)
1.11.1 Example 1

Our function has a triple pole ($n=3$) at $s=a$:

$$G(s)=\frac{F(s)}{(s-a)^3}$$

Our task is to calculate the general expression for the residue of the triple pole at $s=a$. According to Eq. 1.11.12 it is:

$$\mathrm{res}=\lim_{s\to a}\,\frac{1}{(3-1)!}\,\frac{d^{\,3-1}}{ds^{\,3-1}}\Big[(s-a)^3\,G(s)\Big]=\frac{1}{2}\,\lim_{s\to a}\,\frac{d^2}{ds^2}\Big[(s-a)^3\,G(s)\Big]$$
1.11.2 Example 2

Here we shall calculate with numerical values. We intend to find the residues for the double pole at $s=-2$ and for the single pole at $s=-3$ of the function:

$$F(s)=\frac{5}{(s+2)^2\,(s+3)}$$

Solution:

$$\mathrm{res}_1=\lim_{s\to-2}\,\frac{1}{(2-1)!}\,\frac{d^{\,2-1}}{ds^{\,2-1}}\left[(s+2)^2\,\frac{5}{(s+2)^2(s+3)}\right]
=\lim_{s\to-2}\,\frac{d}{ds}\,\frac{5}{s+3}=\frac{-5}{(s+3)^2}\bigg|_{s\to-2}=-5$$

$$\mathrm{res}_2=\lim_{s\to-3}\,(s+3)\,\frac{5}{(s+2)^2(s+3)}=\frac{5}{(s+2)^2}\bigg|_{s\to-3}=5$$
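The double-pole recipe of Eq. 1.11.12 can be cross-checked by numerical differentiation of $(s+2)^2 F(s)=5/(s+3)$; a sketch (not from the original text), with the step sizes chosen arbitrarily:

```python
def F(s):
    return 5.0 / ((s + 2.0) ** 2 * (s + 3.0))

def phi(s):
    # (s + 2)**2 * F(s), with the double-pole factor cancelled analytically
    return 5.0 / (s + 3.0)

h = 1e-5
# res1 = (1/1!) * d/ds [phi(s)] at s = -2, via a central difference
res1 = (phi(-2.0 + h) - phi(-2.0 - h)) / (2 * h)

# res2 = lim (s + 3) * F(s) at s = -3, evaluated just off the pole
s = -3.0 + 1e-7
res2 = (s + 3.0) * F(s)

print(round(res1, 4), round(res2, 4))   # -5.0 and 5.0
```

Both numerical values match the hand calculation above, confirming that the derivative (not a plain limit) is what the repeated pole requires.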
[Fig. 1.12.1 and 1.12.2: a contour $C$ (the 'crust') enclosing five poles, $s_1\dots s_5$, each surrounded by a small circular contour, $C_1\dots C_5$; cuts lead from the crust to each hole, and the integration starts at the point $A$]
Impossible? No! If we take a knife and make a cut from the crust towards each hole without removing any cheese, we provide the necessary path for the suggested contour, as shown in Fig. 1.12.2.

Now we calculate a contour integral starting from the point $A$ in the suggested (counter-clockwise) direction until we come to the cut towards the first pole, $s_1$. We follow the cut towards the contour $C_1$, follow it around the pole and then go along the cut again, back to the crust. We continue around the crust up to the cut of the next pole and so on, until we arrive back at the point $A$ and close the contour. Since we have not removed any cheese in making the cuts, the paths from the crust to the corresponding hole and back again cancel out in this integration path. As we have proved by Eq. 1.9.6:

$$\int_a^b F(s)\,ds+\int_b^a F(s)\,ds=0$$

Therefore, only the contour $C$ around the crust and the small contours $C_1,\dots,C_5$ around the rims of the holes containing the poles are what we must consider in the integration around the contour in Fig. 1.12.2. The contour $C$ was taken counter-clockwise, whilst the contours $C_1,\dots,C_5$ were taken clockwise.
$$\oint_C F(s)\,ds+\oint_{C_1}F(s)\,ds+\cdots+\oint_{C_5}F(s)\,ds=0$$  (1.12.1)

The result of integration is zero because along this circuitous contour of integration we have had the regular domain always on the left side. By changing the sense of encircling of the contours $C_1,\dots,C_5$ we may write Eq. 1.12.1 also in the form:

$$\oint_C F(s)\,ds=\oint_{C_1}F(s)\,ds+\cdots+\oint_{C_5}F(s)\,ds$$  (1.12.2)
When we changed the sense of encircling, we changed the sign of the integrals; this allows us to put them on the right hand side with a positive sign. Now all the integrals have positive (counter-clockwise) encircling. Therefore the integral encircling all the poles is equal to the sum of the integrals encircling each particular pole.

By observing this equation we realize that the right hand side is the sum of the residues of all five poles, multiplied by $2\pi j$. Thus for the general $n$-pole case Eq. 1.12.2 may also be written as:

$$\oint_C F(s)\,ds=2\pi j\sum_{i=1}^{n}\mathrm{res}_i$$  (1.12.3)

Eq. 1.12.2 and 1.12.3 are called the Cauchy–Goursat theorem; they are essential for the inverse Laplace transform.
The inverse transform requires the evaluation of the integral:

$$f(t)=\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)\,e^{st}\,ds$$
The reader is invited to examine Fig. 1.13.1, where the function $|F(s)|=|1/s|$ is plotted. The function has one simple pole at the origin of the complex plane. The resulting surface has been cut to expose an arbitrarily chosen integration path $L$ between $s_1=x_1+j\,y_1=0-j\,0.5$ and $s_2=x_2+j\,y_2=-0.5+j\,0$ (see the integration path in the plot of the $s$ domain in Fig. 1.13.2).
Fig. 1.13.1: The complex function magnitude, $|F(s)|=|1/s|$. The resulting surface has been cut to expose an arbitrarily chosen integration path $L$, starting at $s_1=0-j\,0.5$ and ending at $s_2=-0.5+j\,0$. On the path of integration the function $|F(s)|$ has a maximum value $M$.
Fig. 1.13.2: The complex domain of Fig. 1.13.1 shows the arbitrarily chosen integration path $L$, from $s_1=0-j\,0.5$ to $s_2=-0.5+j\,0$.
Let us take a closer look at the area $A$ between $s_1$, $s_2$, $|F(s_1)|$ and $|F(s_2)|$, shown in Fig. 1.13.3. The area $A$ corresponds to the integral of $F(s)$ from $s_1$ to $s_2$ and it can be shown that it is always smaller than, or at best equal to, the rectangle $M\,L$:
$$\left|\,\int_{s_1}^{s_2}F(s)\,ds\,\right|=\left|\,\int_{s_1}^{s_2}\frac{ds}{s}\,\right|\le\int_{s_1}^{s_2}\frac{|ds|}{|s|}\le\int_{s_1}^{s_2}M\,|ds|=M\,L$$  (1.13.1)

Here $M$ is the greatest value of $|F(s)|$ for this particular path of integration $L$, as shown in Fig. 1.13.3, in which the resulting 3D area between $s_1$, $s_2$, $|F(s_1)|$ and $|F(s_2)|$ was stretched flat. So:

$$\left|\,\int_{s_1}^{s_2}F(s)\,ds\,\right|\le\int_{s_1}^{s_2}\big|F(s)\big|\,|ds|\le M\,L$$  (1.13.2)
Eq. 1.13.2 is an essential tool in the proof of the inverse $\mathcal{L}$ transform via the integral around the closed contour.

Let us now move to network analysis, where we have to deal with rational functions of the complex variable $s=\sigma+j\omega$. These functions have a general form:

$$F(s)=\frac{s^m+b_{m-1}\,s^{m-1}+\cdots+b_1\,s+b_0}{s^n+a_{n-1}\,s^{n-1}+\cdots+a_1\,s+a_0}$$  (1.13.3)

where $m<n$, and both $m$ and $n$ are positive integers. Since we can also express $s=R\,e^{j\theta}$ (as can be derived from Fig. 1.13.4), we may write Eq. 1.13.3 also in the form:

$$F(s)=\frac{R^m\,e^{j\,m\theta}+\cdots+b_0}{R^n\,e^{j\,n\theta}+\cdots+a_0}$$  (1.13.4)

For a large magnitude $R$ the highest powers dominate, so that:

$$\big|F(s)\big|\le\frac{K}{R^{\,n-m}}\le M$$  (1.13.5)

where $K$ is a real constant and $M$ is the maximum value of $F(s)$ within the integration interval, according to Fig. 1.13.1 and 1.13.3 (in [Ref. 1.10, p. 212] the interested reader can find the complete derivation of the constant $K$).
[Fig. 1.13.4: the complex number $s_1=\sigma_1+j\,\omega_1$ drawn in the $s$ plane, with magnitude $R$ and angle $\theta$, so that $\omega_1=R\sin\theta$]

Fig. 1.13.4: Cartesian and polar representations of a complex number (note: $\tan\theta$ is equal for the counter-clockwise defined $\theta$ from the positive real axis and for the clockwise defined $\theta-\pi$).
Let us draw the poles of Eq. 1.13.3 in the complex plane to calculate the integral around an inverted D-shaped contour, as shown in Fig. 1.13.5 (for convenience only three poles have been drawn there). Since Eq. 1.13.3 is assumed to describe a real passive system, all poles must lie either on the left side of the complex plane or at the origin. As we know, the integral around the closed contour embracing all the poles is equal to the sum of the residues of the function $F(s)$:

$$\int_{\sigma_a-j\omega_1}^{\sigma_a+j\omega_1}F(s)\,ds+\int_E F(s)\,ds=2\pi j\sum_{i=1}^{n}\mathrm{res}_i$$  (1.13.6)

The contour has two parts: the straight line from $\sigma_a-j\,\omega_1$ to $\sigma_a+j\,\omega_1$, where $\sigma_a$ is a constant (which we will define more exactly later), and the arc $E=R\,\gamma$, where $\gamma$ is the arc angle and $R$ is its radius. According to Eq. 1.13.2, the line integral along the path $L$ is:

$$\left|\,\int_L F(s)\,ds\,\right|\le M\,L$$  (1.13.7)

where $M$ is the maximum value of the integrand (magnitude!) on the path $L$. In our case:

$$M=\frac{K}{R^{\,n-m}}\qquad\text{and}\qquad L=\gamma\,R=E$$  (1.13.8)
Fig. 1.13.5: The integral along the inverted D-shaped contour encircling the poles is equal to the sum of the residues of each pole. This contour is used to prove the inverse Laplace transform, where the integral along the arc vanishes if $R\to\infty$, provided that the number of poles exceeds the number of zeros by at least 2 (in this example no zeros are shown).
$$\left|\,\int_E F(s)\,ds\,\right|\le\frac{K}{R^{\,n-m}}\,\gamma\,R=\frac{K\,\gamma}{R^{\,n-m-1}}$$  (1.13.9)

If $R\to\infty$:

$$\lim_{R\to\infty}\frac{K\,\gamma}{R^{\,n-m-1}}=0\qquad\text{only if}\qquad n-m\ge 2$$  (1.13.10)
If the condition of Eq. 1.13.10 holds, only the straight part of the contour counts, because if $R\to\infty$ then also $\omega_1\to\infty$, thus changing the limits of the integral along the straight path accordingly. If we make these changes to Eq. 1.13.6, it shrinks to:

$$\int_{\sigma_a-j\infty}^{\sigma_a+j\infty}F(s)\,ds=2\pi j\sum_{i=1}^{n}\mathrm{res}_i$$  (1.13.11)
The function $F(s)$ may also contain the factor $e^{st}$, where $\Re\{s\}\le\sigma_a$ and $t>0$. In this case the constant $\sigma_a$, which is called the abscissa of absolute convergence [Ref. 1.3, 1.5, 1.8], must be small enough to ensure the convergence of the integral. The factor $e^{st}$ is always present in the inverse $\mathcal{L}$ transform. Let us write this factor down and let us divide Eq. 1.13.11 by the factor $2\pi j$. In this way the integral obtains the form:

$$f(t)=\mathcal{L}^{-1}\{F(s)\}=\frac{1}{2\pi j}\int_{\sigma_a-j\infty}^{\sigma_a+j\infty}F(s)\,e^{st}\,ds=\sum\mathrm{res}\Big[F(s)\,e^{st}\Big]$$  (1.13.12)
and this is the formula for the inverse $\mathcal{L}$ transform [Ref. 1.3, 1.5, 1.8]. The above integral is convergent for $t>0$, which is the usual constraint in passive network analysis. This constraint will also apply to all derivations which follow.

In the condition written in Eq. 1.13.10 we see that the order of the denominator's polynomial must exceed the order of the numerator by at least two, otherwise we could not prove the inverse $\mathcal{L}$ transform by the method derived above. This means that the number of poles must exceed the number of zeros by at least two. However, in network theory we often deal with the input functions called positive real functions [Ref. 1.16]. The degree of the denominator in these functions may exceed the degree of the numerator by one only. To prove the inverse $\mathcal{L}$ transform for such a case, we must reach for another method. The proof is possible by using a rectangular contour [Ref. 1.5, 1.13, 1.17]:
When the degree of the denominator exceeds the degree of the numerator by one only, Eq. 1.13.5 is reduced to:

$$\big|F(s)\big|\le\frac{K}{R}\le M$$  (1.13.13)

and the simplest such function is:

$$F(s)=\frac{1}{s-s_p}=\frac{1}{s+\sigma_p}$$  (1.13.14)
This is a single-pole function, with the pole on the negative real axis (for our calculations it is not essential that the pole lies on the real axis, but in the theory of real passive networks a single pole always lies either on the negative $\sigma$ axis or at the origin of the complex plane).

The pole and the rectangular contour with the sides $E_1$, $E_2$, $E_3$ and $E_4$ are shown in Fig. 1.13.6. We will integrate around this rectangular contour. At the same time we let both $\sigma_1\to\infty$ and $\omega_1\to\infty$. Next we will prove, considering these limits, that the line integrals along the sides $E_2$, $E_3$ and $E_4$ are all equal to zero.
Fig. 1.13.6: By using a rectangular contour as shown it is possible to prove the inverse Laplace transform by means of the contour integral, even if the number of poles exceeds the number of zeros by only one. In this integral, encircling the single simple pole, we let $\sigma_1\to\infty$ and $\omega_1\to\infty$, so that the integrals along $E_2$, $E_3$ and $E_4$ vanish.

$$\oint F(s)\,e^{st}\,ds=\left(\,\int_{E_1}+\int_{E_2}+\int_{E_3}+\int_{E_4}\right)F(s)\,e^{st}\,ds$$  (1.13.15)
Here we will include the factor e^{st} (which always appears in the inverse ℒ transform) at the very beginning, because it will help us in making the integral along ℓ₃ convergent. Let us start with the integral along the side ℓ₂, where ω₁ is constant:

    | ∫ℓ₂ F(s) e^{st} ds | = | ∫_{σ_a}^{−σ₁} F(σ + jω₁) e^{(σ + jω₁) t} dσ | ≤ ∫_{−σ₁}^{σ_a} |A/ω₁| e^{σ t} dσ = |A/(ω₁ t)| (e^{σ_a t} − e^{−σ₁ t}) → 0  for σ₁, ω₁ → ∞                (1.13.16)
Since we are calculating the absolute value, we can exchange the limits of the last integral. The integral along ℓ₄ is almost equal:

    | ∫ℓ₄ F(s) e^{st} ds | = | ∫_{−σ₁}^{σ_a} F(σ − jω₁) e^{(σ − jω₁) t} dσ | ≤ ∫_{−σ₁}^{σ_a} |A/ω₁| e^{σ t} dσ = |A/(ω₁ t)| (e^{σ_a t} − e^{−σ₁ t}) → 0  for σ₁, ω₁ → ∞                (1.13.17)
Along the side ℓ₃ the real part σ = −σ₁ is constant, so the decaying factor e^{−σ₁ t} can be taken out of the integral:

    | ∫ℓ₃ F(s) e^{st} ds | = | ∫_{ω₁}^{−ω₁} F(−σ₁ + jω) e^{(−σ₁ + jω) t} j dω | ≤ |A/σ₁| e^{−σ₁ t} (ω₁ + ω₁) = |2 ω₁ A/σ₁| e^{−σ₁ t} → 0  for σ₁, ω₁ → ∞                (1.13.18)
Since the integrals along ℓ₂, ℓ₃ and ℓ₄ are all equal to zero if σ₁ → ∞ and ω₁ → ∞, only the integral along ℓ₁ remains, which, in the limit, is equal to the integral along the complete rectangular contour and, in turn, to the sum of the residues of the poles of F(s):

    lim_{ω₁→∞} ∫_{σ_a − jω₁}^{σ_a + jω₁} F(s) e^{st} ds = ∫_{σ_a − j∞}^{σ_a + j∞} F(s) e^{st} ds = ∮ F(s) e^{st} ds = 2πj Σ res [F(s) e^{st}]                (1.13.19)
If this equation is divided by 2πj, we again obtain Eq. 1.13.12, which is the inverse Laplace transform of the function F(s).
Although there was only a single pole in our F(s) in Eq. 1.13.14, the result obtained is valid in the general case, when F(s) has n poles and n − 1 zeros.
Thus we have proved the ℒ⁻¹ transform by means of a contour integral for positive real functions. As in Eq. 1.13.12, here, too, the abscissa of absolute convergence σ_a must be chosen so that all the poles satisfy ℜe{s} < σ_a, and also t > 0, in order to ensure the convergence of the integral. However, we may also integrate along a straight path, where σ > σ_a, provided that all the poles remain on the left side of the path.
From all the complicated equations above the reader need remember only one important fact, which we will use very frequently in the following sections: by means of the ℒ⁻¹ transform of F(s), the complex transfer function of a linear network, we obtain the real time function f(t) as the sum of the residues of all the poles of the complex frequency function F(s) e^{st}.
Let us put this in the symbolic form:

    f(t) = ℒ⁻¹{F(s)} = Σ res [F(s) e^{st}]                (1.13.20)
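The residue-sum recipe of Eq. 1.13.20 is easy to check numerically. The following sketch (ours, not from the original text; the pole values are chosen arbitrarily) evaluates f(t) as the sum of residues of F(s) e^{st} for a rational F(s) with simple poles:

```python
import numpy as np

def inverse_laplace_residues(poles, numerator, t):
    """f(t) as the sum of residues of F(s) e^{st} over all simple poles,
    where F(s) = numerator(s) / prod_k (s - p_k)."""
    f = np.zeros_like(t, dtype=complex)
    for pk in poles:
        # residue at a simple pole p_k: numerator(p_k) e^{p_k t} / prod_{m != k} (p_k - p_m)
        others = [pk - pm for pm in poles if pm != pk]
        f += numerator(pk) * np.exp(pk * t) / (np.prod(others) if others else 1.0)
    return f.real

t = np.linspace(0.0, 5.0, 200)

# single pole at s = -sigma_p (Eq. 1.13.14): f(t) = A e^{-sigma_p t}
A, sigma_p = 3.0, 2.0
f1 = inverse_laplace_residues([-sigma_p], lambda s: A, t)
assert np.allclose(f1, A * np.exp(-sigma_p * t))

# two simple real poles: F(s) = 1/((s+1)(s+2))  ->  f(t) = e^{-t} - e^{-2t}
f2 = inverse_laplace_residues([-1.0, -2.0], lambda s: 1.0, t)
assert np.allclose(f2, np.exp(-t) - np.exp(-2.0 * t))
```

The same helper will handle the two- and three-pole cases of the next section.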
[Fig. 1.14.1: an RLC network driven by the input current i_i, which splits into the capacitor current i_C and the current through the inductor and the resistor R.]
The input current is composed of two components, the current through the capacitor, I_C, and the current through the inductor (and the resistor R), I_L ; V_i is the input voltage:

    I_i = I_C + I_L = V_i sC + V_i / (sL + R) = V_i (s²LC + sRC + 1) / (sL + R)                (1.14.1)
Correspondingly:

    Z_i = V_i / I_i = (sL + R) / (s²LC + sRC + 1)                (1.14.2)
This is a typical input function [Ref. 1.16]; in this case it has the form of an (input) impedance, Z_i. The characteristic of an input function is that the number of poles exceeds the number of zeros by one only. The output voltage V_o is:
    V_o = V_i · R / (sL + R)                (1.14.3)

and so:

    V_o / I_i = R / (s²LC + sRC + 1)                (1.14.4)
The result is the transfer function of the network (from input to output, but expressed as the output to input ratio). Since the dimension of Eq. 1.14.4 is (complex) ohms, it is also named the transimpedance. In general we will assume that the input current is 1/R [A], in order to obtain a normalized transfer function:

    G(s) = V_o / (1 [V]) = 1 / (s²LC + sRC + 1)                (1.14.5)
In our later applications of the circuit in Fig. 1.14.1 the denominator of Eq. 1.14.5 must have complex roots (although, in general, the roots can also be real). Now let us calculate both roots of the denominator from its canonical form:

    s² + s R/L + 1/(LC) = 0                (1.14.6)

with the roots:

    s₁,₂ = −R/(2L) ± √( R²/(4L²) − 1/(LC) )                (1.14.7)

In special cases, some of which we shall analyze in the later parts of the book, the roots may also be double and real.
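As a quick numerical cross-check (a sketch of ours, with arbitrarily chosen component values, not an example from the text), the roots of the canonical denominator can be compared with Eq. 1.14.7:

```python
import numpy as np

R, L, C = 1.0, 0.5, 0.5   # illustrative values only

# canonical denominator of Eq. 1.14.6: s^2 + s R/L + 1/(L C) = 0
roots = np.roots([1.0, R / L, 1.0 / (L * C)])

# the same roots from the quadratic formula of Eq. 1.14.7
d = np.sqrt(complex(R**2 / (4.0 * L**2) - 1.0 / (L * C)))
s12 = np.array([-R / (2.0 * L) + d, -R / (2.0 * L) - d])

key = lambda z: z.imag
assert np.allclose(sorted(roots, key=key), sorted(s12, key=key))

# the product of the roots is 1/(L C) and their sum is -R/L
assert np.isclose(np.prod(roots), 1.0 / (L * C))
assert np.isclose(np.sum(roots), -R / L)
```

With these values the discriminant is negative, so the roots form the complex conjugate pair required later in the text.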
Expressing the transfer function, Eq. 1.14.5, by its roots, we obtain:

    G(s) = (1/(LC)) · 1 / [ (s − s₁)(s − s₂) ]                (1.14.8)
From the ℒ⁻¹ transform of this function we obtain the system's impulse response in the time domain, g(t), i.e., the response to δ(t). The factor 1/(LC) is the system resonance squared, which in a different network may take a different form (in the general normalized second-order case it is equal to the product of the two poles, s₁s₂). Thus, we put K = 1/(LC):

    g(t) = ℒ⁻¹{G(s)} = ℒ⁻¹{ K / [(s − s₁)(s − s₂)] } = (K/2πj) ∮ e^{st} / [(s − s₁)(s − s₂)] ds = K Σ res { e^{st} / [(s − s₁)(s − s₂)] }                (1.14.9)
    res₁ = lim_{s→s₁} (s − s₁) e^{st} / [(s − s₁)(s − s₂)] = e^{s₁t} / (s₁ − s₂)

    res₂ = lim_{s→s₂} (s − s₂) e^{st} / [(s − s₁)(s − s₂)] = e^{s₂t} / (s₂ − s₁)                (1.14.10)
[Fig. 1.14.2 shows the surface |G(s)| over the complex plane, with the pole pair s₁, s₂.]
Fig. 1.14.2: The magnitude of the system transfer function, Eq. 1.14.8, for s₁,₂ = (−1 ± j)/√2 and K = 1. For σ = 0, the surface |G(s)| reduces to the frequency-response magnitude curve, |G(jω)|. The height at s₁,₂ is infinite, but was limited to 3 in order to see |G(jω)| in detail.
The sum of both residues, multiplied by K, gives the impulse response:

    g(t) = K (e^{s₁t} − e^{s₂t}) / (s₁ − s₂)                (1.14.11)
Now we insert s₁ = σ₁ + jω₁ and s₂ = σ₁ − jω₁:

    g(t) = K [ e^{(σ₁ + jω₁)t} − e^{(σ₁ − jω₁)t} ] / [ (σ₁ + jω₁) − (σ₁ − jω₁) ]                (1.14.12)

    = (K/ω₁) e^{σ₁t} (e^{jω₁t} − e^{−jω₁t}) / 2j                (1.14.13)

Since:

    sin ω₁t = (e^{jω₁t} − e^{−jω₁t}) / 2j                (1.14.14)

then:

    g(t) = (K/ω₁) e^{σ₁t} sin ω₁t                (1.14.15)
Here K is set by the pole product:

    K = s₁s₂ = σ₁² + ω₁² = 1/(LC)                (1.14.16)

so that the impulse response can also be written as:

    g(t) = [ (σ₁² + ω₁²)/ω₁ ] e^{σ₁t} sin ω₁t = [ √(σ₁² + ω₁²) / |sin θ| ] e^{σ₁t} sin ω₁t                (1.14.17)

where θ is the angle between a pole and the positive σ axis, as in Fig. 1.13.4.
In our example σ₁² + ω₁² = 1 (Butterworth case), so Eq. 1.14.17 can be simplified:

    g(t) = (1/|sin θ|) e^{σ₁t} sin ω₁t                (1.14.18)

Note that Eq. 1.14.13 and Eq. 1.14.17 are valid for any complex pole pair, not just for Butterworth poles. This completes the calculation of the impulse response.
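The equivalence of the residue form (Eq. 1.14.11) and the closed form (Eq. 1.14.15) is easy to verify numerically; the sketch below (ours, not from the text) uses the Butterworth pole pair of the example:

```python
import numpy as np

# Butterworth pole pair: sigma1^2 + omega1^2 = 1
sigma1, omega1 = -np.sqrt(0.5), np.sqrt(0.5)
s1, s2 = sigma1 + 1j * omega1, sigma1 - 1j * omega1
K = sigma1**2 + omega1**2          # = s1 s2, Eq. 1.14.16

t = np.linspace(0.0, 15.0, 1500)

# residue form, Eq. 1.14.11
g_residues = (K * (np.exp(s1 * t) - np.exp(s2 * t)) / (s1 - s2)).real
# closed form, Eq. 1.14.15
g_closed = (K / omega1) * np.exp(sigma1 * t) * np.sin(omega1 * t)

assert np.allclose(g_residues, g_closed)
assert abs(g_closed[0]) < 1e-12    # the impulse response starts from zero
```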
The next case, in which we are interested more often, is the step response. In Example 1, Sec. 1.5, we have calculated that the unit step function in the time domain corresponds to 1/s in the frequency domain. To obtain the step response in the time domain, we need only multiply the frequency response by 1/s and calculate the inverse ℒ transform of the product. So by multiplying G(s) by 1/s we obtain a new function:

    F(s) = (1/s) G(s) = K / [ s (s − s₁)(s − s₂) ]                (1.14.19)
To calculate the step response in the time domain we use the ℒ⁻¹ transform:

    f(t) = ℒ⁻¹{F(s)} = ℒ⁻¹{ K / [s (s − s₁)(s − s₂)] } = (K/2πj) ∮ e^{st} / [s (s − s₁)(s − s₂)] ds = K Σ res { e^{st} / [s (s − s₁)(s − s₂)] }                (1.14.20)
The difference between Eq. 1.14.9 and Eq. 1.14.20 is that here we have an additional pole s₀ = 0, because of the factor 1/s. Thus here we have three residues:

    res₀ = lim_{s→0} s e^{st} / [s (s − s₁)(s − s₂)] = 1 / (s₁s₂)

    res₁ = lim_{s→s₁} (s − s₁) e^{st} / [s (s − s₁)(s − s₂)] = e^{s₁t} / [s₁ (s₁ − s₂)]

    res₂ = lim_{s→s₂} (s − s₂) e^{st} / [s (s − s₁)(s − s₂)] = e^{s₂t} / [s₂ (s₂ − s₁)]                (1.14.21)
In the double-pole case (coincident pole pair, s₁ = s₂) the calculation is different (remember Eq. 1.11.12) and it will be shown in several examples in Part 2. The time domain function is the sum of all three residues (K/(s₁s₂) is factored out):

    f(t) = [K/(s₁s₂)] { 1 + s₂ e^{s₁t} / (s₁ − s₂) + s₁ e^{s₂t} / (s₂ − s₁) }                (1.14.22)
By expressing s₁ = σ₁ + jω₁ and s₂ = σ₁ − jω₁ in the factored-out constant we obtain:

    K/(s₁s₂) = K / [(σ₁ + jω₁)(σ₁ − jω₁)] = K / (σ₁² + ω₁²) = 1                (1.14.23)
so that:

    f(t) = 1 + [ (σ₁ − jω₁) e^{(σ₁ + jω₁)t} − (σ₁ + jω₁) e^{(σ₁ − jω₁)t} ] / 2jω₁                (1.14.24)

By factoring out e^{σ₁t}, and with a slight rearranging, we arrive at:

    f(t) = 1 + e^{σ₁t} [ σ₁ (e^{jω₁t} − e^{−jω₁t}) − jω₁ (e^{jω₁t} + e^{−jω₁t}) ] / 2jω₁                (1.14.25)

Since e^{jω₁t} − e^{−jω₁t} = 2j sin ω₁t and e^{jω₁t} + e^{−jω₁t} = 2 cos ω₁t, we can simplify Eq. 1.14.25 into the form:

    f(t) = 1 + e^{σ₁t} [ (σ₁/ω₁) sin ω₁t − cos ω₁t ]                (1.14.26)
We could now numerically calculate the response, but we want to show two things:
1) how the formula relates to the physical circuit behavior;
2) explain an error, all too often ignored (even by experienced engineers!).
We can further simplify the sine/cosine term by using the vector sum of the two phasors (this relation can be found in any mathematics handbook):

    A sin α + B cos α = √(A² + B²) sin(α + θ),  where θ = arctan(B/A)

so that:

    f(t) = 1 + [ √(σ₁² + ω₁²) / ω₁ ] e^{σ₁t} sin(ω₁t + θ),  θ = arctan(ω₁/|σ₁|)                (1.14.27)
For the Butterworth case the square root term is equal to √2, but in the general case it is:

    √(σ₁² + ω₁²) / ω₁ = 1 / |sin θ|                (1.14.28)

Note that for any value of σ₁ and ω₁ their squares can never be negative, which is reflected in the absolute value notation; on the other hand, it is important to preserve the correct sign of the phase shifting term in sin(ω₁t + θ). By putting Eq. 1.14.28 back into Eq. 1.14.27 we obtain a relatively simple expression:

    f(t) = 1 + (1/|sin θ|) e^{σ₁t} sin(ω₁t + θ)                (1.14.29)
If we now insert the numerical values for σ₁, ω₁ and θ and plot the function for t in the interval from 0 to 10, the resulting graph will be obviously wrong! What happened?
Let us check our result by applying the rule of the initial and final value from Sec. 1.6. We will use Eq. 1.14.29 and Eq. 1.14.8, considering that K = s₁s₂ (Eq. 1.14.16).
1. Check the initial value in the frequency domain, s → ∞:

    f(0) = lim_{s→∞} s F(s) = lim_{s→∞} s · s₁s₂ / [s (s − s₁)(s − s₂)] = 0                (1.14.30)

whilst in the time domain Eq. 1.14.29 gives:

    f(0) = 1 + e^{σ₁·0} sin(ω₁·0 + θ) / |sin θ| = 2                (1.14.31)

which is wrong!
2. Check the final value for t → ∞:

    f(∞) = lim_{t→∞} [ 1 + (1/|sin θ|) e^{σ₁t} sin(ω₁t + θ) ] = 1 = lim_{s→0} s F(s) = s₁s₂ / [(0 − s₁)(0 − s₂)]                (1.14.32)

and at least this one is correct in both the time and the frequency domain. Note that in both checks the pole at s = 0 is canceled by the multiplication of F(s) by s.
Considering the error in the initial value in the time domain, many engineers wrongly assume that they have made a sign error and change the time domain equation to:

    f(t) = 1 − (1/|sin θ|) e^{σ₁t} sin(ω₁t + θ)                (wrong!)

Although the step response plot will now be correct, a careful analysis shows that the negative sign is completely unjustified! Instead we should have used:

    θ = π + arctan(ω₁/|σ₁|)                (1.14.33)

The reason for the added π lies in the tangent function, which repeats with a period of π radians (and not 2π, as the sine and cosine do). This results in a lost sign, since the arctangent cannot distinguish angles in the first quadrant from those in the third, nor angles in the second quadrant from those in the fourth. See Appendix 2.3 for more such cases in 3rd- and 4th-order systems.
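In numerical work the quadrant ambiguity disappears if the phase is computed with the two-argument arctangent; the sketch below (our illustration, using the Butterworth pair σ₁ = −1/√2, ω₁ = 1/√2) shows that the corrected phase of Eq. 1.14.33 agrees with atan2 and that the initial and final values then check out:

```python
import numpy as np

sigma1, omega1 = -np.sqrt(0.5), np.sqrt(0.5)

theta_naive = np.arctan(omega1 / abs(sigma1))   # first-quadrant angle: gives f(0) = 2
theta = np.pi + theta_naive                     # corrected phase, Eq. 1.14.33

# atan2 keeps track of the signs of both phasor components,
# so no manual addition of pi is needed
theta2 = np.arctan2(-omega1, sigma1)            # same angle, modulo 2 pi
assert np.isclose(np.sin(theta2), np.sin(theta))
assert np.isclose(np.cos(theta2), np.cos(theta))

t = np.linspace(0.0, 10.0, 1000)
f = 1.0 + np.exp(sigma1 * t) * np.sin(omega1 * t + theta) / abs(np.sin(theta))
f_bad = 1.0 + np.exp(sigma1 * t) * np.sin(omega1 * t + theta_naive) / abs(np.sin(theta_naive))

assert np.isclose(f[0], 0.0)               # correct initial value (Eq. 1.14.30)
assert np.isclose(f_bad[0], 2.0)           # the wrong plot of Eq. 1.14.31
assert np.isclose(f[-1], 1.0, atol=1e-2)   # final value (Eq. 1.14.32)
```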
A graphical presentation of the step response solution, given by Eq. 1.14.29 and
with the correct initial phase angle, Eq. 1.14.33, is displayed in Fig. 1.14.3.
The physical circuit behavior can be explained as follows:
The system resonance term, sin ω₁t, is first shifted by θ, the characteristic angle of the pole, becoming sin(ω₁t + θ) (the time shift is θ/ω₁). At resonance the voltage and current in the reactive components are each other's derivatives (a sine/cosine relationship, see Eq. 1.14.26); the initial phase angle θ reflects their impedance ratio.
The amplitude of the shifted function is then corrected by the absolute value of the function at t = 0, which is |1/sin θ|. Thus the starting value of the product is −1, and in addition its slope is precisely identical to the initial slope of the exponential damping function, e^{σ₁t}, so that their product has zero initial slope.
This product is the system reaction to the unit step excitation, h(t), which sets the final value for t → ∞ (s₀ = 0). Summing the residue at s₀ (res₀ = 1) with the reaction function gives the final result, the step response f(t).
[Fig. 1.14.3: panel a) shows the components of the solution: the unit step h(t) (0 for t < 0, 1 for t > 0), the exponential envelope e^{σ₁t}, the shifted resonance sin(ω₁t + θ), their product e^{σ₁t} sin(ω₁t + θ)/|sin θ|, and the pole layout s₁, s₂ (σ₁ = −1/√2, ω₁ = 1/√2) together with the input-signal pole s₀; panel b) shows the resulting step response f(t) = 1 + e^{σ₁t} sin(ω₁t + θ)/|sin θ|.]
Fig. 1.14.3: Step-by-step graphical representation of the procedure used in the calculation of the step response of a system with two complex conjugate poles.
We have purposely presented the complete calculation of the step response for the RLC circuit in every detail for two reasons:
1) to show how the step response is calculated by means of the ℒ⁻¹ transform and the theory of residues; and
2) because we shall meet such functions very frequently in the following parts.
1.15 Convolution

In network analysis we often encounter a cascade of networks, in which the output of the preceding network drives the input of the following one. The output of the latter network is therefore a response to the response of the preceding network. We need a procedure to solve such problems. In the time domain this is done by the convolution integral [Ref. 1.2]. Fig. 1.15.1 displays the complete procedure of convolution.
In Fig. 1.15.1a there are two networks:
The network A has a Bessel pole pair with the following data: s₁,₂ = σ₁ ± jω₁ = −1.500 ± j0.866; the pole angle is θ₁ = 150°. In addition, owing to the input unit step function, we have a third pole, s₀ = 0.
The network B has a Butterworth pole pair with the following data: s₃,₄ = σ₂ ± jω₂ = −0.7071 ± j0.7071; the pole angle is θ₂ = 135°.
Bessel and Butterworth poles are discussed in detail in Part 4 and Part 6.
According to Eq. 1.14.29 the step response of the network A is:

    f(t) = 1 + (1/|sin θ₁|) e^{σ₁t} sin(ω₁t + θ₁)

and, according to Eq. 1.14.17, the impulse response of the network B is:

    g(t) = (1/|sin θ₂|) e^{σ₂t} sin ω₂t
Both functions are shown in Fig. 1.15.1a. We will convolve g(t), because it is easier to do so. This convolving (folding) is done by time reversal about t = 0, obtaining g(τ − t). The reversal interval τ has to be chosen so that g(τ) ≈ 0 (or at least very close to zero), otherwise the convolution integral would not converge to the correct final value. The output function y(t) is then the convolution integral:

    y(t) = ∫₀^{t_max} f(t) g(τ − t) dt = ∫₀^{t_max} [ 1 + (1/|sin θ₁|) e^{σ₁t} sin(ω₁t + θ₁) ] · (1/|sin θ₂|) e^{σ₂(τ − t)} sin[ω₂(τ − t)] dt                (1.15.1)
To solve this integral requires a formidable effort, and the reader may be assured that we shall not attempt to solve it here, because, as we will see later, there is a more elegant method of doing so. We have written the complete integral merely to give the reader an example of convolution based on the functions which we have already calculated. Nevertheless, it is a challenge for the reader who wants to do it himself (for the construction of the diagrams in Fig. 1.15.1, this integral has been solved!).
In Fig. 1.15.1b we first fold the function g(t) and introduce the time constant τ to obtain g(τ − t). Next, in Fig. 1.15.1c, the function g(τ − t) is shifted right along the time axis to the position t = 1. The area A₁ under the product of the two signals is the value of the convolution integral for the interval 0 < t < 1.
In a similar fashion, in Fig. 1.15.1d the function g(τ − t) is shifted to t = 2. Here the value of the convolution integral for the interval 0 < t < 2 is equal to the area A₂.
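The shift, multiply and integrate procedure of Fig. 1.15.1 is exactly what a discrete convolution performs. The sketch below (our numerical illustration of the data given above, not part of the original text) reproduces the behaviour described here: the output starts at zero, settles at the final value of f(t), and overshoots because of the Butterworth pair:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 20.0, dt)

# network A (Bessel pair): step response per Eq. 1.14.29 with Eq. 1.14.33
sg1, om1 = -1.5, 0.866
th1 = np.pi + np.arctan(om1 / abs(sg1))
f = 1.0 + np.exp(sg1 * t) * np.sin(om1 * t + th1) / abs(np.sin(th1))

# network B (Butterworth pair): ideal impulse response per Eq. 1.14.17
sg2, om2 = -0.7071, 0.7071
th2 = np.deg2rad(135.0)
g = np.exp(sg2 * t) * np.sin(om2 * t) / abs(np.sin(th2))

# y(t) = f(t) * g(t): np.convolve slides the folded g past f,
# the discrete analogue of the areas A1 ... A5 of Fig. 1.15.1
y = np.convolve(f, g)[:t.size] * dt

assert abs(y[0]) < 1e-3            # the output starts at zero
assert abs(y[-1] - 1.0) < 1e-2     # and settles at the final value of f(t)
assert y.max() > 1.0               # overshoot caused by the undershoot of g(t)
```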
[Fig. 1.15.1: The convolution procedure: a) the pole layout (s₁, s₂ and s₀ of network A; s₃, s₄ of network B), the step response f(t) of network A and the impulse response g(t) of network B, with y(t) = f(t) ∗ g(t) = ∫ f(t) g(τ − t) dt; b) folding of g(t); c)–g) g(τ − t) shifted to t = 1, 2, 3, 4, 5, each shift giving the area Aᵢ = y(tᵢ) under the product f(t) g(τ − t); h) the output y(t), assembled from the areas A₀ … A₅.]
In Fig. 1.15.1e, the function g(τ − t) is shifted to t = 3 to obtain the area A₃, and in Fig. 1.15.1f the function g(τ − t) is shifted to t = 4, resulting in the area A₄, which is in part negative, owing to the shape of g(τ − t).
In Fig. 1.15.1g, t = 5 and the area A₅ is obtained. Since f(t) has nearly reached its final value and g(t) is almost zero for t > 5, any further shifting changes the area only slightly.
Finally, in Fig. 1.15.1h the values A₁, …, A₅ are inserted to point to the particular values of the output function y(t). For comparison, the input of network B, f(t), is also drawn. Although f(t) has almost no overshoot, the Butterworth poles of the network B cause an undershoot in g(t), which results in an overshoot in the output signal y(t).
Important note: in the last plot of Fig. 1.15.1 the system response y(t) is plotted as if the network B had unity gain. The impulse response of a unity gain system is characterized by the whole area under it being equal to 1; consequently, its peak amplitude would be very small compared to f(t), so there would not be much to see. Therefore for g(t) we have plotted its ideal impulse response. The normalization to unity gain is accomplished by dividing the ideal impulse response by its own time integral (numerically, each instantaneous amplitude sample is divided by the sum of all samples). See Part 6 and Part 7 for more details.
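The numerical normalization described in the note can be sketched in a few lines (our illustration; the √2 amplitude is 1/|sin θ₂| for the Butterworth pair):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 20.0, dt)

# ideal impulse response of network B, as plotted in Fig. 1.15.1
g = np.sqrt(2.0) * np.exp(-0.7071 * t) * np.sin(0.7071 * t)

# unity-gain normalization: each amplitude sample divided by the sum of all samples
g_unity = g / g.sum()
assert np.isclose(g_unity.sum(), 1.0)

# for this particular g the continuous time integral is already ~1
# (the pole pair is normalized), so the discrete normalization amounts
# to little more than a multiplication by dt
assert np.isclose(g.sum() * dt, 1.0, atol=1e-2)
```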
From Eq. 1.6.51 it has become evident that convolution in the time domain corresponds to a simple multiplication in the frequency domain. This is also shown in Fig. 1.15.2. The upper half of the figure is the s domain, whilst the bottom half is the t domain. Instead of performing the convolution g(t) ∗ f(t) in the t domain, which is difficult (see Eq. 1.15.1), we rather perform the simple multiplication G(s) F(s) in the s domain. Then, by means of the ℒ⁻¹ transform, we obtain the function y(t) which we are looking for.
[Fig. 1.15.2: block diagram: in the s domain the signal transform F(s), multiplied by the system transfer function G(s), gives the response transform Y(s) (an easy algebraic equation); in the t domain the signal f(t), convolved with the system's g(t), gives the response y(t) (a difficult integral equation); the ℒ and ℒ⁻¹ transforms connect the two domains.]
Fig. 1.15.2: Equivalence of the system response calculation in the time domain, f(t) ∗ g(t), and in the frequency domain, F(s) G(s). For analytical work the transform route is the easy way. For computer use the direct method is preferred.
By taking the transform route we need only calculate the sum of all the residues (five in the case shown in Fig. 1.15.1), which is far less difficult than the calculation of the integral in Eq. 1.15.1. The mathematical expression which applies to this case is:

    y(t) = ℒ⁻¹{G(s) F(s)} = Σ res { [ s₁s₂ / (s (s − s₁)(s − s₂)) ] · [ s₃s₄ / ((s − s₃)(s − s₄)) ] e^{st} }                (1.15.2)

Here the numerators of both fractions have been normalized by introducing the products s₁s₂ and s₃s₄, respectively, to replace the constant K (according to Eq. 1.14.16) in Eq. 1.14.19 and Eq. 1.14.8. A solution of the above equation can be found in Part 2, Sec. 2.6.
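With the normalization above, Eq. 1.15.2 is just a five-residue sum; the sketch below (ours, not from the text) evaluates it with the pole data of Fig. 1.15.1 and confirms the limit values:

```python
import numpy as np

# poles of Fig. 1.15.1: s0 from the input step, s1, s2 (Bessel), s3, s4 (Butterworth)
s1, s2 = -1.5 + 0.866j, -1.5 - 0.866j
s3, s4 = -0.7071 + 0.7071j, -0.7071 - 0.7071j
poles = [0.0 + 0.0j, s1, s2, s3, s4]
num = (s1 * s2 * s3 * s4).real          # normalized numerator s1 s2 s3 s4

t = np.linspace(0.0, 20.0, 2000)
y = np.zeros_like(t)
for pk in poles:
    # residue of the 5-pole function at the simple pole pk
    rest = np.prod([pk - pm for pm in poles if pm != pk])
    y += (num * np.exp(pk * t) / rest).real

assert abs(y[0]) < 1e-9           # the sum of all residues at t = 0 is zero
assert abs(y[-1] - 1.0) < 1e-4    # the residue at s0 sets the final value
```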
Fig. 1.15.2 also reveals another very important possibility. If the input signal f(t) is known and a certain output signal y(t) is desired, we can synthesize (not always!) the intermediate network G(s) by taking the ℒ transform of both time functions and calculating their quotient:

    G(s) = Y(s) / F(s)                (1.15.3)
Résumé of Part 1

So far we have discussed the Laplace transform and its inverse only to the extent which the reader needs for understanding the rest of the book.
Since we shall calculate many practical examples of the ℒ⁻¹ transform in the following chapters, we have discussed extensively only the calculation of the time function of a simple two-pole network with a complex conjugate pole pair, excited by the unit step function.
The readers who want to broaden their knowledge of the Laplace transform can find enough material for further study in the references quoted.
References:

[1.1]
[1.2]
[1.3] M.F. Gardner & J.L. Barnes, Transients in Linear Systems Studied by the Laplace Transformation, Twelfth Printing, John Wiley & Sons, New York, 1956.
[1.4] O. Föllinger, Laplace- und Fourier-Transformation.
[1.5]
[1.6] G. Doetsch, Anleitung zum praktischen Gebrauch der Laplace-Transformation und der Z-Transformation, R. Oldenbourg Verlag, Munich–Vienna, 1985.
[1.7] T.F. Bogart, Jr., Laplace Transforms and Control Systems, Theory for Technology, John Wiley & Sons, New York, 1982.
[1.8] M. O'Flynn & E. Moriarty, Linear Systems, Time Domain and Transform Analysis, John Wiley, New York, 1987.
[1.9] G.A. Korn & T.M. Korn, Mathematical Handbook for Scientists and Engineers, McGraw-Hill, New York, 1961.
[1.10]
[1.11]
[1.12]
[1.13]
[1.14] R.V. Churchill & J.W. Brown, Complex Variables and Applications, Fourth Edition, International Student Edition, McGraw-Hill, Auckland, 1984.
[1.15] W. Gellert, H. Küstner, M. Hellwich, & H. Kästner, The VNR Concise Encyclopedia of Mathematics.
[1.16]
[1.17] P. Starič, Proof of the Inverse Laplace Transform for Positive Real Functions, Elektrotehniški vestnik, Ljubljana, 1991, pp. 23–27.
[1.18]
[1.19]
[1.20]
[1.21]
[1.22]
[1.23]
Part 2: Inductive Peaking Circuits
2.10 Comparison of MFA Frequency Responses and of MFED Step Responses ...................................... 2.103
2.11 The Construction of T-coils .............................................................................................................. 2.105
References ................................................................................................................................................. 2.111
Appendix 2.1: General Solutions for 1st-, 2nd-, 3rd- and 4th-order Polynomials ..................................... A2.1.1
Appendix 2.2: Normalization of Complex Frequency Response Functions ............................................ A2.2.1
Appendix 2.3: Solutions for Step Responses of 3rd - and 4th -order Systems ................................... (CD) A2.3.1
Appendix 2.4: Table 2.10 Summary of all Inductive Peaking Circuits ......................................(CD) A2.4.1
List of Figures:
Fig. 2.1.1: A common base amplifier with VG load ..................................................................................... 2.9
Fig. 2.1.2: A hypothetical ideal rise time circuit ......................................................................................... 2.10
Fig. 2.1.3: A common base amplifier with the series peaking circuit .......................................................... 2.11
Fig. 2.4.1: The basic T-coil circuit and its equivalent ................................................................................. 2.35
Fig. 2.4.2: Modeling the coupling factor ..................................................................................................... 2.35
Fig. 2.4.3: The poles and zeros of the all pass transimpedance function .................................................... 2.40
Fig. 2.4.4: The complex conjugate pole pair of the Bessel type ................................................................. 2.40
Fig. 2.4.5: The frequency response magnitude of the T-coil circuit ............................................................ 2.43
Fig. 2.4.6: The phase response of the T-coil circuit .................................................................................... 2.44
Fig. 2.4.7: The envelope delay of the T-coil circuit .................................................................................... 2.44
Fig. 2.4.8: The step response of the T-coil circuit, taken from G ............................................................... 2.45
Fig. 2.4.9: The step response of the T-coil circuit, taken from V ............................................................... 2.48
Fig. 2.4.10: An example of a system with different input impedances ......................................................... 2.49
Fig. 2.4.11: Input impedance compensation by T-coil sections ................................................................... 2.50
Fig. 2.5.1: The three-pole T-coil network ................................................................................................... 2.51
Fig. 2.5.2: The layout of Bessel poles for Fig.2.5.1 .................................................................................... 2.51
Fig. 2.5.3: The basic trigonometric relations of main parameters for one of the poles ............................... 2.52
Fig. 2.5.4: Three-pole T-coil network frequency response ......................................................................... 2.55
Fig. 2.5.5: Three-pole T-coil network phase response ................................................................................ 2.56
Fig. 2.5.6: Three-pole T-coil network envelope delay ................................................................................ 2.56
Fig. 2.5.7: The step response of the three-pole T-coil circuit ..................................................................... 2.58
Fig. 2.5.8: Low coupling factor, Group 1: frequency response .................................................................. 2.60
Fig. 2.5.9: Low coupling factor of Group 1: step response ........................................................................ 2.60
Fig. 2.5.10: Low coupling factor of Group 2: frequency response ............................................................... 2.61
Fig. 2.5.11: Low coupling factor of Group 2: step response ........................................................................ 2.61
Fig. 2.6.1: The four-pole L+T network ....................................................................................................... 2.63
Fig. 2.6.2: The Bessel four-pole pattern of L+T network ........................................................................... 2.63
Fig. 2.6.3: Four-pole L+T peaking circuit frequency response ................................................................... 2.67
-2.4-
P. Stari, E. Margan
Fig. 2.6.4: Additional frequency response plots of the four-pole L+T peaking circuit ............................... 2.67
Fig. 2.6.5: Four-pole L+T peaking circuit phase response .......................................................................... 2.68
Fig. 2.6.6: Four-pole L+T peaking circuit envelope delay .......................................................................... 2.69
Fig. 2.6.7: Four-pole L+T circuit step response .......................................................................................... 2.72
Fig. 2.6.8: Some additional four-pole L+T circuit step responses .............................................................. 2.72
Fig. 2.10.1: MFA frequency responses of all peaking circuits .................................................................. 2.104
Fig. 2.10.2: MFED step responses of all peaking circuits ......................................................................... 2.104
Fig. 2.11.1: Four-pole L+T circuit step response dependence on component tolerances .......................... 2.105
Fig. 2.11.2: T-coil coupling factor as a function of the coil length to diameter ratio ................................ 2.106
Fig. 2.11.3: Form factor as a function of the coil length to diameter ratio ................................................ 2.108
Fig. 2.11.4: Examples of planar coil structures ......................................................................................... 2.109
Fig. 2.11.5: Compensation of a bonding inductance by a planar T-coil .................................................... 2.109
Fig. 2.11.6: A high coupling T-coil on a double sided PCB ..................................................................... 2.110
List of Tables:
Table 2.2.1: Second-order series peaking circuit parameters ..................................................................... 2.26
Table 2.3.1: Third-order series peaking circuit parameters .......................................................................... 2.34
Table 2.4.1: Two-pole T-coil circuit parameters ......................................................................................... 2.48
Table 2.5.1: Three-pole T-coil circuit parameters ....................................................................................... 2.59
Table 2.6.1: Four-pole L+T peaking circuit parameters .............................................................................. 2.71
Table 2.7.1: Two-pole shunt peaking circuit parameters ............................................................................. 2.81
Table 2.8.1: Three-pole shunt peaking circuit parameters ........................................................................... 2.89
Table 2.9.1: Series–shunt peaking circuit parameters ................................................................................ 2.101
Appendix 2.4: Table 2.10 Summary of all Inductive Peaking Circuits ......................................(CD) A2.4.1
2.0 Introduction
In the early days of wideband amplifiers suitable coils were added to the load
(consisting of resistors and stray capacitances) in order to extend the bandwidth, causing in
most cases a resonance peak in the frequency response. Hence the term inductive peaking.
Even though later designers of wideband amplifiers were more careful, doing their best to achieve as flat a frequency response as possible, the word peaking remained, and it is still in use today.
In some respect the British engineer S. Butterworth might be considered the first to
introduce coils in the (then) anode circuits of electronic tubes to construct an amplifier with
a maximally flat frequency (low pass) response. In his work On the Theory of Filter
Amplifiers, published as early as October 1930 [Ref. 2.1], besides introducing the pole
placement which was later named after him, he also mentioned: The writer has
constructed filter units in which the resistances and inductances are wound round a cylinder of length 3 in and diameter 1.25 in, whilst the necessary condensers are contained
within the core of the cylinder. However, it is hard to tell exactly the year when these
necessary condensers were omitted to leave only the stray and inter-electrode
capacitances of the electronic tubes to form, together with the properly dimensioned coils
and load resistances, a wideband amplifier with maximally flat frequency response. This
was probably done some time in the mid 1930s, when the first electronic voltmeters,
oscilloscopes, and television amplifiers were constructed.
The need for wideband and pulse amplifiers was emphasized with the introduction
of radar during the Second World War. A book of historical value, G. E. Valley & H.
Wallman, Vacuum Tube Amplifiers [Ref. 2.2] was written right after the war and
published in 1948. Apart from details about other types of amplifiers, the most important
knowledge about wideband amplifiers, gained during the war in the Radiation Laboratory
at Massachusetts Institute of Technology, was made public. In this work the amplifier step
response calculation also received the necessary attention.
After the war the people who had worked in the Radiation Laboratory spread over the USA and the UK, and many of them started working at firms where oscilloscopes were produced. Many articles were written about wideband amplifiers with inductive peaking, but books which would thoroughly discuss wideband amplifiers were almost non-existent. The reason was probably that the emphasis had shifted from the frequency domain to the time domain, where a gap-free mathematical discussion was considered difficult. Nevertheless, here and there a book on this subject appeared, and one of the most significant was published in 1957 in Prague: J. Bednařík & J. Daněk, Obrazové zesilovače pro televisi a měřicí techniku (Video Amplifiers for Television and Measuring Technique) [Ref. 2.3]. There the authors attempted to present a thorough discussion of all the inductive peaking circuits known at that time and also of high frequency resonant amplifiers. Computers were a rare commodity in those days, with restricted access, and equally rare was the programming knowledge; this prevented the authors from executing some important calculations, which are too elaborate to be done by pencil and paper.
An important change in wideband amplifier design, using inductive peaking, was
introduced by E.L. Ginzton, W.R. Hewlett, J.H. Jasberg, and J.D. Noe in their revolutionary
article Distributed Amplification, [Ref. 2.4]. This was an amplifier with electronic tubes
connected in parallel, where the grid and anode interconnections were made of lumped
sections of a delay line. In this way the bandwidth of the amplifier was extended beyond
the limits imposed by the mutual conductance (g_m) divided by the stray capacitance (C_in) of
electronic tubes. For reasons which we will discuss in Part 3, this type of amplification has
a rather limited application if transistors are used instead of electronic tubes. The necessary
delay in a distributed amplifier was realized using the so-called m-derived T-coils, which
did not have a constant input impedance. The correct T-coil circuit was developed in 1964
by C.R. Battjes [Ref. 2.17] and was used for inductive peaking of wideband amplifiers.
Compared with a simple series peaking circuit, a T-coil circuit improves the bandwidth and
rise time exactly twofold. For many years the T-coil peaking circuits were considered a
trade secret, so the first complete mathematical derivations were published by a pupil of
C.R. Battjes only in the early 1980s [Ref. 2.5, 2.6] and in 1995 by C.R. Battjes himself
[Ref. 2.18]. Transistor inter-stage coupling with T-coils represented a special problem,
which was solved by R.I. Ross in the late 1960s. This too was considered a classified matter
and appeared in print some ten to twenty years later [Ref. 2.7, 2.8, 2.9]. Owing to the
superb performance of the T-coil circuit we shall discuss it very thoroughly. The transistor
inter-stage T-coil coupling will be derived in Part 3.
Here in Part 2, we shall first explain the basic idea of inductive peaking, followed
by the discussion of the peaking circuits with poles only: series peaking two-pole, series
peaking three-pole, T-coil two-pole, T-coil three-pole, and L+T four-pole circuits. This will
be followed by peaking circuits with poles and zeros: the shunt peaking two-pole, one-zero
circuit, the shunt peaking three-pole, two-zero circuit, and the shunt-series peaking circuit. For
each of the circuits discussed we shall calculate and plot the frequency, phase, envelope
delay, and the step response. The emphasis will be on T-coil circuits, owing to their superb
performance. All the necessary calculations will be explained as we proceed and, whenever
practical, the complete derivations will be given. The exception is the step response of the
series peaking circuit with one complex conjugate pole pair, which was already derived and
explained in Part 1. Since the complete calculation for the step responses of four-pole L+T
circuits and shunt-series peaking circuits is rather complicated, only the final formulae will
be given. Those readers who want the derivations for these circuits as well will be
able to obtain them by learning and applying the principles derived in Parts 1 and 2
(some assistance can also be found in Appendices 2.1, 2.2 and 2.3).
We strongly recommend that beginners study Sec. 2.2 and 2.3: the circuit
examples are simple enough to allow the analysis to be easily followed and learned; the
same methods can then be applied to the more sophisticated circuits in other sections, in which
some of the most basic details are omitted and some equations are imported from those two
sections.
At the end of Part 2 we shall draw two diagrams, showing the Butterworth (MFA)
frequency responses and the Bessel (MFED) step responses, to offer an easy comparison of
performance. Finally, in Appendix 2.4 we give a summary table containing the essential
design parameters and equations for all the circuits discussed.
Fig. 2.1.1: A common base amplifier with an RC load: the basic circuit and its step response.
The total load capacitance is C = C_cb + C_s + C_L; the output voltage rises as 1 − e^{−t/RC},
crossing 10 % of the final value at t1 and 90 % at t2, so the rise time is τ_r1 = 2.2 RC.
Because of these capacitances, the output voltage v_o does not jump suddenly to the
value i_c R, where i_c is the collector current. Instead this voltage rises exponentially
according to the formula (see Part 1, Eq. 1.7.15):

   v_o = i_c R (1 − e^{−t/RC})        (2.1.1)

The time elapsed between 10 % and 90 % of the final output voltage value (i_c R)
we call the rise time, τ_r1 (the index 1 indicates that it is the rise time of a single-pole
circuit). We calculate it by inserting the 10 % and 90 % levels into Eq. 2.1.1:

   0.1 i_c R = i_c R (1 − e^{−t1/RC})   ⟹   t1 = −RC ln 0.9        (2.1.2)

   0.9 i_c R = i_c R (1 − e^{−t2/RC})   ⟹   t2 = −RC ln 0.1        (2.1.3)

   τ_r1 = t2 − t1 = RC ln(0.9/0.1) = 2.2 RC        (2.1.4)

The value 2.2 RC is the reference against which we shall compare the rise time of all
other circuits in the following sections of the book.
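The 2.2 RC result of Eq. 2.1.4 is easy to confirm numerically; the following short script is our illustration (not part of the original text), evaluating the 10 % and 90 % crossing times of the exponential step response:

```python
import math

# Single-pole step response, with time in units of RC: g(t) = 1 - exp(-t)
t10 = -math.log(1 - 0.1)    # time to reach 10 % of the final value
t90 = -math.log(1 - 0.9)    # time to reach 90 % of the final value
rise_time = t90 - t10       # equals ln(0.9/0.1) = ln 9

print(round(rise_time, 4))  # 2.1972, i.e. the 2.2 RC of Eq. 2.1.4
```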
Since in wideband amplifiers we strive to make the output voltage a replica of the
input voltage (except for the amplitude), we want to reduce the rise time of the amplifier as
much as possible. As the output voltage rises, more current flows through R and less
current remains to charge C. Obviously, we would achieve a shorter rise time if we could
disconnect R in some way until C is charged to the desired level. To do so let us introduce
a switch S between the capacitor C and the load resistor R. This switch is open at time
t = 0, when the current step starts, but closes at time t = RC, as in Fig. 2.1.2. In this way
we force all the available current into the capacitor, so it charges linearly to the voltage i_c R.
When the capacitor has reached this voltage, the switch S is closed, routing all the current
to the loading resistor R.
Fig. 2.1.2: A hypothetical ideal rise time circuit. The switch disconnects R from the circuit, so that all
of i_c is available to charge C; but after a time t = RC the switch is closed and all i_c flows through R.
The resulting output voltage is shown in b, compared with the exponential response in a.
By comparing Fig. 2.1.1 with Fig. 2.1.2, we note a substantial decrease in rise time
τ_r0, which we calculate from the output voltage:

   v_o(τ) = (1/C) ∫₀^τ i_c dt = i_c τ/C = i_c R        (2.1.5)

where τ = RC. Since the charging of the capacitor is linear, as shown in Fig. 2.1.2, the rise
time is simply:

   τ_r0 = 0.9 RC − 0.1 RC = 0.8 RC        (2.1.6)

In comparison with Fig. 2.1.1, where there was no switch, the improvement factor
of the rise time is:

   η_r = τ_r1/τ_r0 = 2.20 RC / 0.8 RC = 2.75        (2.1.7)
It is evident that the rise time (Eq. 2.1.6) is independent of the actual value of
the current i_c, but the maximum voltage i_c R (Eq. 2.1.5) is not. On the other hand, the
smaller the resistor R, the smaller the rise time. Clearly the introduction of the switch S
would mean a great improvement. By using a more powerful transistor and a lower value
resistor R we could (at least in principle) decrease the rise time at will (provided that C
remains unchanged). Unfortunately, it is impossible to make a low on-resistance switch,
functioning as in Fig. 2.1.2, which would also suitably follow the signal and automatically
open and close in microseconds or even in nanoseconds. So it remains only a nice idea.
But instead of a switch we can insert an appropriate inductance L between the
capacitor C and the resistor R and so partially achieve the effect of the switch, as shown in
Fig. 2.1.3. Since the current through an inductor cannot change instantaneously, more
current will be charging C, at least initially. The configuration of the RLC network allows
us to take the output voltage either from the resistor R or from the capacitor C. In the first
case we have a series peaking network, whilst in the second case we speak of a shunt
peaking network. Both types of peaking networks are used in wideband amplifiers.
Fig. 2.1.3: A common base amplifier with the series peaking circuit. The output voltage v_o
(curve c) is compared with the exponential response (a, L = 0) and the response using the
ideal switch (b). If we were to take the output voltage from the capacitor C, we would have a
shunt peaking circuit (see Sec. 2.7).

We have already seen the complete derivation of the procedure for calculating the step
response in Part 1, Sec. 1.14. However, the response optimization in accordance with
different design criteria is shown in Sec. 2.2 for the series peaking circuit and in Sec. 2.7
for the shunt peaking circuit.
Fig. 2.1.3 shows the simplest series peaking circuit. Later, when we discuss T-coil
circuits, we shall not just achieve rise time improvements similar to that in Eq. 2.1.7: in
cases in which it is possible (and it usually is) to split C into two parts, we shall obtain a
substantially greater improvement.
2.2 Series Peaking Circuit
In Fig. 2.2.1 we have repeated the collector loading circuit of Fig. 2.1.3. Since the
inductive peaking circuits are used mostly as collector load circuits, from here on we shall
omit the transistor symbol; instead we shall show the input current I_i (formerly I_c) flowing
into the network, with the common ground as its drain. At first we shall discuss the
behavior of the network in the frequency domain, assuming that I_i is the RMS value of the
sinusoidally changing input current. This current is split into two parts: the current through
the capacitance, I_C, and the current through the inductance, I_L. Thus we have:

   I_i = I_C + I_L = V_i jωC + V_i/(jωL + R) = V_i [jωC + 1/(jωL + R)]        (2.2.1)
where the input voltage V_i is the product of the driving current I_i and the input impedance
Z_i (represented by the expression in parentheses). The output voltage is:

   V_o = I_L R = V_i R/(jωL + R)        (2.2.2)

and, by expressing V_i with I_i:

   V_o = I_i R / {[jωC + 1/(jωL + R)] (jωL + R)} = I_i R / [(jω)² LC + jωRC + 1]        (2.2.3)

Let us set I_i R = 1 and L = mR²C, where m is a dimensionless parameter; also let us
substitute jω with s. With these substitutions the output voltage V_o = F(s) becomes:

   F(s) = 1 / (s² mR²C² + sRC + 1) = [1/(mR²C²)] · 1 / [s² + s/(mRC) + 1/(mR²C²)]        (2.2.4)
The denominator roots, which for an efficient peaking must be complex conjugates,
as in Fig. 2.2.2, are the poles of F(s):

   s_{1,2} = −1/(2mRC) ± √[ 1/(4m²R²C²) − 1/(mR²C²) ]        (2.2.5)

Fig. 2.2.2: The poles s1 and s2 in the complex plane. If the parameter m = 0, the poles are
s1 = −1/RC and s2 → −∞. By increasing m, they travel along the real axis towards each
other and meet at s1 = s2 = −2/RC for m = 0.25. Increasing m further, the poles split
into a complex conjugate pair travelling along a circle, the radius of which is r = 1/RC and
its center at σ = −r. The figure on the right shows the four characteristic layouts, which are
explained in detail in the text.
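The pole trajectory described in Fig. 2.2.2 can be verified numerically. The sketch below is ours (not from the book) and assumes R = C = 1; it evaluates Eq. 2.2.5 and checks that for m > 0.25 the poles indeed lie on a circle of radius 1/RC centered at −1/RC:

```python
import cmath

def poles(m, R=1.0, C=1.0):
    """Poles of the two-pole series peaking circuit, Eq. 2.2.5."""
    a = -1.0 / (2.0 * m * R * C)
    d = cmath.sqrt(1.0 / (4.0 * m**2 * R**2 * C**2) - 1.0 / (m * R**2 * C**2))
    return a + d, a - d

s1, s2 = poles(0.25)
print(s1, s2)                 # double real pole at -2/RC for m = 0.25

for m in (0.3, 0.5, 1.0, 3.0):
    s1, s2 = poles(m)
    r = abs(s1 - (-1.0))      # distance from the circle center at -1/RC
    print(round(r, 6))        # 1.0 for every m > 0.25
```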
With these poles we may write Eq. 2.2.4 also in the following form:

   F(s) = [1/(mR²C²)] · 1 / [(s − s1)(s − s2)]        (2.2.6)

whilst at DC (s = 0):

   F(0) = [1/(mR²C²)] · 1/(s1 s2)        (2.2.7)

By dividing Eq. 2.2.6 by Eq. 2.2.7, we obtain the amplitude normalized transfer function:

   F(s) = s1 s2 / [(s − s1)(s − s2)]        (2.2.8)

We shall need this expression for the calculation of the step response. But for the
frequency response F(jω) we replace both poles by their components from Eq. 2.2.5 and
group the imaginary parts to obtain:

   F(jω) = (σ1² + ω1²) / {[σ1 + j(ω − ω1)] [σ1 + j(ω + ω1)]}        (2.2.9)

with the magnitude:

   |F(jω)| = (σ1² + ω1²) / √{[σ1² + (ω − ω1)²] [σ1² + (ω + ω1)²]}        (2.2.10)
The next step is the calculation of the parameter m. Its value depends on the type of
poles we want to have, which in turn depends on the intended application of the amplifier.
As a general rule, for sine wave signal amplification we prefer the Butterworth poles, whilst
for pulse amplification we prefer the Bessel poles. If high bandwidth is not of primary
importance, we can use a critically damped system for a zero overshoot step response.
Other types of poles are optimized for use in filters, in which our primary goal is to
selectively amplify only a part of the spectrum. Poles are discussed in Part 4 (derived from
some chosen optimization criteria) and Part 6 (computer algorithms).
2.2.1 Butterworth Poles for Maximally Flat Amplitude Response (MFA)

We shall calculate the actual values of the poles, as well as the parameter m, by
using Eq. 2.2.5, where we factor out −1/(2mRC). If the square root in Eq. 2.2.11 is
imaginary, which is true for m > 0.25, we can also factor out the imaginary unit:

   s_{1,2} = −[1/(2mRC)] (1 ∓ √(1 − 4m)) = −[1/(2mRC)] (1 ∓ j√(4m − 1))        (2.2.11)
We now compare this relation with the normalized 2nd-order Butterworth poles (the
reader can find them in Part 4, Table 4.3.1, or by running the BUTTAP computer routine
given in Part 6). The values obtained are σ1t = −0.7071 and ω1t = 0.7071.

Note: From now on we will append the index t to the poles taken from the
tables or calculated by a suitable computer program; these values are
normalized to the frequency of 1 radian per second.
Since both the real and the imaginary axis of the Laplace plane have the
dimension of frequency, the pole dimension is radians per second [rad/s];
however, it has become almost a custom not to write the dimensions.
The sign is also seldom written; instead, most authors leave it to the
reader to keep in mind that the poles of unconditionally stable systems
always have a negative real part, whilst the imaginary part is either zero or
both positive and negative, forming a complex conjugate pair.
To make it easier for the reader, we shall always have the symbols σ
and ω signed as required by the mathematical operation to be performed,
whilst the numerical values within the symbols will always be negative for σ
and positive for ω. For example, we shall express a complex conjugated pole
pair s1, s2 = s1, s1* as:

   s1 = σ1 + jω1 = −0.7071 + j0.7071
   s2 = σ2 + jω2 = −0.7071 − j0.7071
   s3 = σ3 = −1.000

Each σi and ωi will bear the index of the pole si (and not their table
order number). We shall use the odd index for complex conjugate pair
components (with the appropriate sign for the imaginary part).
In order to have the same response, the poles of Eq. 2.2.11 must be proportional to
those from the tables, so the ratio of their imaginary to the real part must be the same:

   √(4m − 1)/1 = ω1/σ1 = ℑ{s1}/ℜ{s1} = ℑ{s1t}/ℜ{s1t} = ω1t/σ1t = 0.7071/0.7071 = 1        (2.2.12)

and the same is true for s2 (except for the sign). From Eq. 2.2.12 it follows
that the value of m which satisfies our requirement for the Butterworth poles must be:

   m = 0.5        (2.2.13)

Thus the inductance is:

   L = mR²C = 0.5 R²C        (2.2.14)
Finally, by inserting the value of m back into Eq. 2.2.11, the poles of our system are:

   s_{1,2} = σ1 ± jω1 = (1/RC)(−1 ± j)        (2.2.15)

The value 1/RC = ω_h is equal to the upper half power frequency of the non-peaking
amplifier of Fig. 2.1.1 (at this frequency, since power is proportional to voltage squared, the
voltage gain drops to 1/√2 = 0.7071). If we put 1/RC = 1 (or R = 1 Ω and C = 1 F,
or R = 500 kΩ and C = 2 μF, or any other similar combination, provided that it can be
driven by the signal source), we obtain the normalized (denoted by the index n) poles:

   s_{1n,2n} = σ_{1n} ± jω_{1n} = −1 ± j        (2.2.16)

If we use normalized poles, we must also normalize the frequency: jω/ω_h instead of jω.
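With m = 0.5 the magnitude of the transfer function of Eq. 2.2.4 indeed takes the maximally flat Butterworth form. A quick numerical check of ours (not from the original text), with R = C = 1 so that ω is in units of ω_h:

```python
import math

def mag(w, m):
    """|F(jw)| of Eq. 2.2.4 with R = C = 1 (w in units of w_h)."""
    return 1.0 / math.hypot(1.0 - m * w * w, w)

# For m = 0.5 the magnitude reduces to the Butterworth form 1/sqrt(1 + (w/sqrt(2))**4):
for w in (0.3, 0.7, 1.0, 1.5):
    print(round(mag(w, 0.5), 6),
          round(1.0 / math.sqrt(1.0 + (w / math.sqrt(2.0))**4), 6))

# Half power is reached at w = sqrt(2), i.e. the bandwidth is improved by 41 %:
print(round(mag(math.sqrt(2.0), 0.5), 4))   # 0.7071
```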
Note: It is important not to confuse our system with normalized poles (Eq. 2.2.16)
with the system having normalized Butterworth poles taken from the table
(s1t, s2t = −0.7071 ± j0.7071). Although both are Butterworth-type and both are
normalized, they differ in bandwidth:

   s1t s2t = 1   whilst   s1n s2n = 2        (2.2.17)

This will become evident soon in Sec. 2.2.4, where we shall calculate and plot the
magnitude (absolute value) of the frequency response.
2.2.2 Bessel Poles for Maximally Flat Envelope Delay (MFED) Response

From Table 4.4.3 in Part 4 (or by using the BESTAP routine in Part 6), the poles
for the 2nd-order Bessel system are σ1t = −1.5000 and ω1t = 0.8660. Then, as for the
Butterworth case above, the ratio of their imaginary to real component is:

   ℑ{s1}/ℜ{s1} = ω1t/σ1t = 0.8660/1.5000 = 1/√3 = √(4m − 1)        (2.2.18)

from which:

   m = 1/3 ≈ 0.33        (2.2.19)

and the required inductance is:

   L = 0.33 R²C        (2.2.20)

By inserting this value of m back into Eq. 2.2.11, the poles are:

   s_{1,2} = (1/RC)(−1.5 ± j0.866)        (2.2.21)

2.2.3 Critical Damping (CD)

For a critically damped response both poles must be real and equal, which means
that the square root in Eq. 2.2.11 must be zero; this requires:

   m = 0.25        (2.2.22)

so that the inductance is:

   L = 0.25 R²C        (2.2.23)

and the double real pole is:

   s_{1,2} = −2/RC        (2.2.24)
=",#
In general the parameter 7 may be calculated with the aid of Fig. 2.2.2, where both
poles and the angle ) are shown. If the poles are expressed by Eq. 2.2.11:
tan )
%7 "
ee=" f
="
de=" f
"
5"
(2.2.25)
" tan# )
%
(2.2.26)
which is also equal to "% cos# ), as can be found in some literature. We prefer Eq. 2.2.26 .
Now we have all the data needed for further calculations of the frequency, phase,
time delay, and step responses.
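Eq. 2.2.26 reproduces the three characteristic values of m directly from the pole angle; a minimal check of ours:

```python
import math

def m_from_theta(theta_deg):
    """Eq. 2.2.26: m = (1 + tan^2(theta))/4, with theta as defined in Fig. 2.2.2."""
    t = math.tan(math.radians(theta_deg))
    return (1.0 + t * t) / 4.0

print(round(m_from_theta(45.0), 4))   # Butterworth (MFA): theta = 45 deg, m = 0.5
print(round(m_from_theta(30.0), 4))   # Bessel (MFED):     theta = 30 deg, m = 0.3333
print(round(m_from_theta(0.0), 4))    # critical damping:  theta = 0 deg,  m = 0.25
```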
2.2.4 Frequency Response Magnitude

We have already written the magnitude in Eq. 2.2.10. Here we will use the
normalized frequency ω/ω_h:

   |F(ω)| = (σ1n² + ω1n²) / √{[σ1n² + (ω/ω_h − ω1n)²] [σ1n² + (ω/ω_h + ω1n)²]}        (2.2.27)

The upper half power frequency ω_H follows from the condition that the magnitude
falls to 1/√2 of its DC value:

   |F(ω_H)| = (σ1² + ω1²) / √{[σ1² + (ω_H − ω1)²] [σ1² + (ω_H + ω1)²]} = 1/√2        (2.2.28)
We shall use the term upper half power frequency intentionally, rather than the term
upper 3 dB frequency, which is commonly found in the literature. Whilst it has become a
custom to express the amplifier gain in dB, the dB scale (the log of the output-to-input
power ratio) implies that the driving circuit, which supplies the current I_i to the input, has
the same internal resistance as the loading resistor R. This is not the case in most of the
circuits which we shall discuss.
Fig. 2.2.3: Frequency response magnitude of the two-pole series peaking circuit for some
characteristic values of m: a) m = 0.50 is the maximally flat amplitude (MFA) response; b)
m = 0.33 is the maximally flat envelope delay (MFED) response; c) m = 0.25 is the critical
damping (CD) case; the non-peaking case (m = 0, L = 0) is the reference. The bandwidth of all
peaking responses is improved compared to the non-peaking bandwidth ω_h at V_o/(I_i R) = 0.7071.
2.2.5 Upper Half Power Frequency

For a series peaking circuit the calculation of ω_H is relatively easy. The calculation
becomes progressively more difficult for more sophisticated networks, where more poles
and sometimes even zeros are introduced. In such cases it is better to use a computer; in
Part 6 we have presented the development of routines which the reader can use to calculate
the various response functions.
If we solve Eq. 2.2.28 for ω_H/ω_h we can define [Ref. 2.2, 2.4]:

   η_b = ω_H/ω_h        (2.2.29)
The value η_b is the cut off frequency improvement factor, defined as the ratio of the
system upper half power frequency to that of the non-peaking amplifier (and, since the
lower half power frequency of a wideband amplifier is generally very low, the response
usually being flat down to DC, we may call η_b also the bandwidth improvement factor).
In Table 2.2.1 at the end of this section the bandwidth improvement factors and other data
for different values of the parameter m are given.
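As an illustration (ours, not from the book), η_b for the three characteristic cases can be found by solving the half-power condition of Eq. 2.2.28 numerically. With R = C = 1 the frequency variable is already normalized to ω_h, and the magnitude decreases monotonically for m ≤ 0.5, so bisection suffices:

```python
import math

def mag(w, m):
    # |F(jw)| from Eq. 2.2.4 with R = C = 1
    return 1.0 / math.hypot(1.0 - m * w * w, w)

def eta_b(m):
    """Bandwidth improvement eta_b = w_H/w_h (Eq. 2.2.29), by bisection on Eq. 2.2.28."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mag(mid, m) > 1.0 / math.sqrt(2.0):
            lo = mid          # still above half power, move up
        else:
            hi = mid
    return lo

print(round(eta_b(0.50), 3))   # MFA:  1.414
print(round(eta_b(0.33), 3))   # MFED: about 1.36
print(round(eta_b(0.25), 3))   # CD:   about 1.29
```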
2.2.6 Phase Response

We calculate the phase angle φ of the output voltage V_o referred to the input current
I_i by finding the phase shift φ_k(ω) of each pole s_k = σ_k + jω_k and then summing them:

   φ(ω) = Σ_{k=1…n} φ_k(ω) = Σ_{k=1…n} arctan[(ω − ω_k)/σ_k]        (2.2.30)

In Eq. 2.2.30 we have the ratio of the imaginary part to the real part of the pole, so
the pole values may be either exact or normalized. For normalized values we must also
normalize the frequency variable as ω/ω_h. Our frequency response function (Eq. 2.2.8) has
two complex conjugated poles, therefore the phase response is:

   φ = arctan[(ω/ω_h − ω1n)/σ1n] + arctan[(ω/ω_h + ω1n)/σ1n]        (2.2.31)
In Fig. 2.2.4 the phase plots corresponding to the same values of m as in Fig. 2.2.3
are shown.

Fig. 2.2.4: Phase response of the series peaking circuit for a) MFA; b) MFED; c) CD case,
compared with the non-peaking response (L = 0). The phase angle scale was converted from
radians to degrees by multiplying it by 180/π. For ω → ∞ the non-peaking (single-pole) response
has its asymptote at −90°, whilst the second-order peaking systems have their asymptote at −180°.
2.2.7 Phase Delay and Envelope Delay

By dividing the phase angle by the frequency we obtain the phase delay τ_φ:

   τ_φ = φ/ω        (2.2.32)

If ω is the positive angular frequency with which the input signal phasor rotates,
then the angle φ by which the output signal phasor lags the input is defined in the direction
opposite to ω, meaning that, for a phase delay, φ will be negative, as in Fig. 2.2.4;
consequently τ_φ will also be negative. Note that τ_φ has the dimension of time.
Now, τ_φ is obviously frequency dependent, so in order to evaluate the time domain
performance of a wideband amplifier on a fair basis we are much more interested in the
specific phase delay, known as the envelope delay (also group delay); it is the frequency
derivative of the phase angle as a function of frequency:

   τ_e = dφ/dω        (2.2.33)

Here, too, a negative result means a delay and a positive result an advance with respect to
the input signal. In Fig. 2.2.5 a tentative explanation of the difference between the phase
delay and the envelope delay is displayed, both in the time domain and as a phasor diagram.
Fig. 2.2.5: Phase delay and envelope delay definitions. The switch S is closed at the instant t0,
applying a sinusoidal voltage with amplitude Vg to the input of the amplifier having a frequency
dependent amplitude response A(ω) and its associated phase response φ(ω). The input signal
envelope is a unit step. The output envelope lags the input by τ_e = dφ/dω, measured from t0 to t1,
where t1 is the instant at which the output envelope reaches 50 % of its final value. A number of
periods later, the phase delay can be measured as the time between the input and output zero
crossings, indicated by t2 and t3, and is expressed as τ_φ = φ/ω. Note the phase lag being defined in
the opposite direction of the rotation ωt in the corresponding phasor diagram.
In the phase advance case, when zeros dominate over poles, the name suggests that
the output voltage will change before the input, which is impossible, of course. To see what
actually happens we apply a sinewave to two simple RC networks, low pass and high pass,
as shown in Fig. 2.2.6. Compare the phase advance case, v_oHP, with the phase delay case,
v_oLP. The input signal frequency is equal to the network cutoff, ω = 1/RC.
Fig. 2.2.6: Phase delay and phase advance. It is evident that both output signals undergo a phase
modulation during the first half period. The time from t0 to the first zero crossing of the output is
shorter for v_oHP (t1HP) and longer for v_oLP (t1LP). However, both envelopes lag the input envelope. On
the other hand, the phase, measured after a number of periods, exhibits an advance of τ_φ for the
high pass network and a delay of τ_φ for the low pass network.
Returning to the envelope delay for the series peaking circuit, in accordance with
Eq. 2.2.33 we must differentiate Eq. 2.2.30. For each pole we have:

   dφ_i/dω = (d/dω) arctan[(ω − ω_i)/σ_i] = σ_i / [σ_i² + (ω − ω_i)²]        (2.2.34)

and, as for the phase delay, the total envelope delay is the sum of the contributions of each
pole (and zero, if any). Again, if we use normalized poles and the normalized frequency,
we obtain the normalized envelope delay, τ_e ω_h, resulting in a unit delay at DC.
For the 2-pole case we have:

   τ_e ω_h = σ1n / [σ1n² + (ω/ω_h − ω1n)²] + σ1n / [σ1n² + (ω/ω_h + ω1n)²]        (2.2.35)
The plots for the same values of m as before, in accordance with Eq. 2.2.35, are
shown in Fig. 2.2.7.
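The flatness of the MFED envelope delay, visible in Fig. 2.2.7, can also be checked without plotting. The sketch below is ours (R = C = 1 assumed); it differentiates the phase of the transfer function of Eq. 2.2.4 numerically, per Eq. 2.2.33:

```python
import math

def phase(w, m):
    # phase of F(jw) from Eq. 2.2.4 with R = C = 1 (denominator 1 - m*w^2 + j*w)
    return -math.atan2(w, 1.0 - m * w * w)

def env_delay(w, m, dw=1e-6):
    """Envelope delay tau_e = d(phi)/d(omega) (Eq. 2.2.33), numerical derivative."""
    return (phase(w + dw, m) - phase(w - dw, m)) / (2.0 * dw)

# Bessel (m = 1/3): the delay stays close to its DC value of -1 well into the passband
for w in (0.01, 0.5, 1.0, 1.5):
    print(round(env_delay(w, 1.0 / 3.0), 4))
```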
For pulse amplification the importance of achieving a flat envelope delay cannot be
overstated. A flat delay means that all the important frequencies will reach the output with
unaltered relative phase, preserving the shape of the input signal as much as possible for the
given bandwidth, thus resulting in minimal overshoot of the step response (see the next
section). Also, since the delay is the phase derivative, a flat delay means that the phase must
be a linear function of frequency up to the cutoff. This is why Bessel systems are often
referred to as linear phase systems. This property cannot be seen in the log scale used here,
but it would be evident if the response were plotted against a linearly scaled frequency; we
leave it to the curious reader to try.
In contrast, the Butterworth system shows a pronounced delay near the cut off
frequency. Conceivably, this will reveal the system resonance upon step excitation.
Fig. 2.2.7: Envelope delay of the series peaking circuit for the same characteristic values of m
as before: a) MFA; b) MFED; c) CD. Note the MFED plot being flat up to nearly 0.5 ω_h.
"
e5" > sin =" > )
ksin )k
(2.2.36)
where ) is the pole angle in radians, ) arctan a =" 5" b 1 (read the following Note!).
Note: We are often forced to calculate some of the circuit parameters from the
trigonometric relations between the real and imaginary components of the pole. The
Cartesian coordinates of the pole s1 in the Laplace plane are σ1 on the real axis and ω1
on the imaginary axis. In polar coordinates the pole is expressed by its modulus (the
distance of the pole from the origin of the complex plane):

   M = |s1| = |σ1 + jω1| = √(σ1² + ω1²)

and its argument (angle) θ, defined so that:

   tan θ = ω1/σ1

Now, a mathematically correct definition of the positive-valued angle is counterclockwise from the positive real axis; so if σ1 is negative, θ will be greater than π/2.
However, the tangent function is defined within the range ±π/2 and then repeats for
values between kπ ± π/2. Therefore, by taking the arctangent, θ = arctan(ω1/σ1), we
lose the information about which half of the complex plane the pole actually lies in, and
consequently a sign can be wrong. This is bad, because the left (negative) side of the
real axis is associated with energy dissipative, that is, resistive circuit action, while the
right (positive) side is associated with energy generative action. This is why
unconditionally stable circuits always have their poles in the left half of the complex
plane.
To keep our analytical expressions simple we will keep track of the pole layout
and correct the sign and value of the arctan by adding π radians to the angle θ
wherever necessary. But in order to avoid any confusion, our computer algorithm should
use a different form of equation (see Part 6).
See Appendix 2.3 for more details.
To use the normalized values of the poles in Eq. 2.2.36 we must also enter the
normalized time, t/T, where T is the system time constant, T = RC. Thus we obtain:

a) for Butterworth poles (MFA), where θ = 3π/4:

   g_a(t) = 1 + √2 e^{−t/T} sin(t/T − 3π/4)        (2.2.37)

b) for Bessel poles (MFED), where θ = 5π/6:

   g_b(t) = 1 + 2 e^{−1.5 t/T} sin(0.866 t/T − 5π/6)        (2.2.38)

c) for Critical damping (CD) we have a double real pole at s1, so Eq. 2.2.36 is not
valid here, because it was derived for simple poles. To calculate the step response for the
function with a double pole, we start with Eq. 2.2.8, insert the same (real!) value s1 = s2
and multiply it by the unit step operator 1/s. The resulting equation:

   G(s) = s1² / [s (s − s1)²]        (2.2.39)

is multiplied by e^{st} and the step response is the sum of the residues. For the pole at the origin:

   res0 = lim_{s→0} s · s1² e^{st} / [s (s − s1)²] = 1        (2.2.40)

For res1 we must use Eq. 1.11.12 in Part 1:

   res1 = lim_{s→s1} (d/ds) {(s − s1)² · s1² e^{st} / [s (s − s1)²]} = e^{s1 t} (s1 t − 1)        (2.2.41)

Eq. 2.2.39 has a double real pole s1 = σ1 = −2/RC or, normalized, σ1n = −2.
We insert this in Eq. 2.2.41 and sum the two residues to obtain the CD step response
(curve c, m = 0.25):

   g_c(t) = 1 − e^{−2t/T} (1 + 2t/T)        (2.2.42)
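The overshoot figures quoted for these responses follow directly from the formulas above; a small numerical sweep of ours (T = RC = 1 assumed) confirms them:

```python
import math

def g_mfa(t):   # Eq. 2.2.37, Butterworth poles (-1 +/- j)/RC, theta = 3*pi/4
    return 1.0 + math.sqrt(2.0) * math.exp(-t) * math.sin(t - 3.0 * math.pi / 4.0)

def g_mfed(t):  # Eq. 2.2.38, Bessel poles (-1.5 +/- j0.866)/RC, theta = 5*pi/6
    return 1.0 + 2.0 * math.exp(-1.5 * t) * math.sin(0.866 * t - 5.0 * math.pi / 6.0)

def g_cd(t):    # Eq. 2.2.42, double pole at -2/RC
    return 1.0 - math.exp(-2.0 * t) * (1.0 + 2.0 * t)

for g in (g_mfa, g_mfed, g_cd):
    peak = max(g(0.001 * i) for i in range(12001))   # sample 0 <= t <= 12
    print(round(100.0 * (peak - 1.0), 2))            # overshoot in %: 4.32, 0.43, then none
```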
The step response plots of all three cases are shown in Fig. 2.2.8. Also shown is the
non-peaking response as the reference (L = 0). The MFA overshoot is δ = 4.3 %, whilst
for the MFED case it is 10 times smaller!
Fig. 2.2.8: Step response of the series peaking circuit for the characteristic values of m:
a) MFA (m = 0.50); b) MFED (m = 0.33); c) CD (m = 0.25); the case m = 0 (L = 0) is the
reference. The MFA overshoot is δ = 4.3 %, whilst for MFED it is only δ = 0.43 %.
2.2.9 Rise Time

Similarly to the bandwidth improvement, we define the rise time improvement factor:

   η_r = τ_r1/τ_r        (2.2.43)

The values for the bandwidth improvement η_b and for the rise time improvement η_r
are similar, but in general they are not equal. In practice we more often use η_b, the
calculation of which is easier. If the step response overshoot is not too large (δ < 2 %) we
can approximate the rise time starting from the formula for the cut off frequency:
   ω_h = 2π f_h = 1/RC   and therefore   f_h = 1/(2πRC)

where ω_h is the upper half power frequency in [rad/s], whilst f_h is the upper half power
frequency in Hz. We have already calculated the non-peaking rise time τ_r1 by Eq. 2.1.4 and
found it to be 2.20 RC. From this we obtain τ_r1 f_h = 2.20/2π = 0.35, a relation we
meet very frequently in practice:

   τ_r = 0.35/f_h        (2.2.44)
By replacing f_h with f_H in this equation, we obtain (an estimate of) the rise time of
the peaking amplifier. But note that by doing so we ignore the fact that Eq. 2.2.44 is exact only for
the single-pole amplifier, where the load is the parallel RC network. For all other cases it
can be used as an approximation only if the overshoot δ < 2 %. The overshoot of a
Butterworth two-pole network amounts to 4.3 %, and it becomes larger with each additional
pole (or pole pair); thus calculating the rise time by Eq. 2.2.44 will result in an excessive error.
An even greater error will result for networks with Chebyshev and Cauer (elliptic) system
poles. In such cases we must compute the actual system step response and find the rise time
from it. For Bessel poles the error is tolerable, since the ω_h-normalized Bessel frequency
response closely follows the first-order response up to ω_h. Even so, using a computer to
obtain the rise time from the step response yields more accurate results.
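The 0.35 constant of Eq. 2.2.44 is nothing more than ln 9 divided by 2π; a one-line check of ours:

```python
import math

tau_r1 = math.log(9.0)            # non-peaking rise time in units of RC (Eq. 2.1.4)
fh_rc = 1.0 / (2.0 * math.pi)     # f_h in units of 1/RC
print(round(tau_r1 * fh_rc, 4))   # 0.3497, the 0.35 of Eq. 2.2.44
```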
2.2.10 Input Impedance

We shall use the series peaking network also as an addition to T-coil peaking. This
is possible since the T-coil network has a constant input impedance (the T-coil is discussed
in Sec. 2.4, 2.5 and 2.6). Therefore it is useful to know the input impedance of the series
peaking network. From Fig. 2.2.1 it is evident that the input impedance is the capacitance C in
parallel with the serially connected L and R:

   Z_i = 1 / [jωC + 1/(jωL + R)] = (jωL + R) / (1 − ω²LC + jωRC)        (2.2.45)

With L = mR²C and ω_h = 1/RC this can be written in normalized form:

   Z_i/R = [1 + jm(ω/ω_h)] / [1 − m(ω/ω_h)² + j(ω/ω_h)]        (2.2.46)
By making the denominator real and carrying out some further rearrangement we obtain:

   Z_i/R = {1 + j(ω/ω_h) [(m − 1) − m²(ω/ω_h)²]} / [1 + (1 − 2m)(ω/ω_h)² + m²(ω/ω_h)⁴]        (2.2.47)

so that the phase angle is:

   φ = arctan(ℑ{Z_i}/ℜ{Z_i}) = arctan{(ω/ω_h) [(m − 1) − m²(ω/ω_h)²]}        (2.2.48)
and the normalized modulus is:

   |Z_i|/R = √{1 + (ω/ω_h)² [(m − 1) − m²(ω/ω_h)²]²} / [1 + (1 − 2m)(ω/ω_h)² + m²(ω/ω_h)⁴]        (2.2.49)
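The limiting behavior seen in Fig. 2.2.9 (Z_i → R at low frequency, and |Z_i| → 1/ωC at high frequency) can be confirmed from Eq. 2.2.45 directly; a short sketch of ours with R = C = 1:

```python
def Z_in(w, m, R=1.0, C=1.0):
    """Input impedance of the series peaking network, Eq. 2.2.45."""
    L = m * R * R * C
    return (1j * w * L + R) / (1.0 - w * w * L * C + 1j * w * R * C)

print(round(abs(Z_in(0.001, 0.5)), 4))     # ~1.0: resistive (equal to R) at low frequency
w = 1000.0
print(round(abs(Z_in(w, 0.5)) * w, 4))     # ~1.0: |Z_i| ~ 1/(w*C) at high frequency
```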
In Fig. 2.2.9 the plots of Eq. 2.2.49 and Eq. 2.2.48 for the same values of m as
before are shown.

Fig. 2.2.9: Input impedance modulus (normalized) and the associated phase angle of the
series peaking circuit for the characteristic values of m: a) MFA; b) MFED; c) CD. Note
that for high frequencies the input impedance approaches that of the capacitance.
Table 2.2.1 shows the design parameters of the two-pole series peaking circuit:

Table 2.2.1

   response    m       η_b     η_r     δ [%]
   MFA         0.50    1.41    1.44    4.3
   MFED        0.33    1.36    1.39    0.43
   CD          0.25    1.29    1.31    0

Table 2.2.1: 2nd-order series peaking circuit parameters summarized: m is the
inductance proportionality factor; η_b is the bandwidth improvement; η_r is the
rise time improvement; and δ is the step response overshoot.
2.3 Three-Pole Series Peaking Circuit
We shall calculate the network transfer function from the input admittance:

   Y_i = jωC_i + 1/R + 1 / [jωL + 1/(jωC)]        (2.3.1)

which, brought to a common denominator, is:

   Y_i = [(1 + jωR C_i)(1 − ω²LC) + jωRC] / [R (1 − ω²LC)]        (2.3.2)

The input impedance is its reciprocal:

   Z_i = 1/Y_i        (2.3.3)

so the input voltage is:

   V_i = I_i Z_i        (2.3.4)

The output voltage V_o is taken from the capacitor C, which, together with L, forms a
voltage divider:

   V_o/V_i = [1/(jωC)] / [jωL + 1/(jωC)] = 1 / (1 − ω²LC)        (2.3.5)

If we insert Eq. 2.3.2 and Eq. 2.3.5 into Eq. 2.3.4, we obtain:

   V_o = I_i R / [1 + jωR(C + C_i) − ω²LC + (jω)³ C_i C L R]        (2.3.6)
Since I_i R is the output voltage at zero frequency, we can obtain the amplitude-normalized
transfer function by dividing Eq. 2.3.6 by I_i R:

   F(ω) = 1 / [1 + jωR(C + C_i) − ω²LC + (jω)³ C_i C L R]        (2.3.7)

As before we set L = mR²(C + C_i) and introduce:

   n = C/(C + C_i)        ω_h = 1/[R(C + C_i)]        (2.3.8)

where ω_h is the upper half power frequency of the non-peaking case (L = 0). With these
substitutions we obtain the function which is normalized both in amplitude and in
frequency (to the non-peaking system cut off):

   F(ω) = 1 / [1 + (jω/ω_h) + mn (jω/ω_h)² + mn(1 − n)(jω/ω_h)³]        (2.3.9)
Since the denominator is a 3rd-order polynomial we have three poles, one of which
must be real and the remaining two should be complex conjugated (readers less
experienced in mathematics can find the general solutions for polynomials of 1st-, 2nd-, 3rd-
and 4th-order in Appendix 2.1). Here we shall show how to calculate the required
parameters in an easier way. The magnitude is:

   |F(ω)| = 1 / √{ℜ²{D(ω)} + ℑ²{D(ω)}}        (2.3.10)

where D(ω) is the denominator of Eq. 2.3.9. By rearranging the real and imaginary parts in
Eq. 2.3.9 and inserting them into Eq. 2.3.10, we obtain:

   |F(ω)| = 1 / √{[1 − mn(ω/ω_h)²]² + [(ω/ω_h) − mn(1 − n)(ω/ω_h)³]²}        (2.3.11)

which, multiplied out, is:

   |F(q)| = 1 / √{1 + (1 − 2mn)q² + mn[mn − 2(1 − n)]q⁴ + m²n²(1 − n)² q⁶}        (2.3.12)

where we have used q = ω/ω_h in order to be able to write the equation on a single line.
2.3.1 Butterworth Poles (MFA)
The magnitude of the normalized frequency response for a three-pole Butterworth
function is:
"
J =
(2.3.13)
'
=
"
=h
By comparing Eq. 2.3.13 with Eq. 2.3.12 we realize that the factors at (ω/ωh)² and at (ω/ωh)⁴ in Eq. 2.3.12 must be zero if we want the function to correspond to Butterworth poles:

   1 − 2mn = 0   and   mn − 2(1 − n) = 0

from which:

   m = 2/3   and   n = 3/4      (2.3.14)

With these data we can calculate the actual values of the Butterworth poles and the upper half power frequency. By inserting m and n into Eq. 2.3.12 and considering that now the coefficients at (ω/ωh)² and at (ω/ωh)⁴ are zero, we obtain the frequency response; its plot is shown in Fig. 2.3.2 as curve a.
To calculate the poles we insert the values for m and n into Eq. 2.3.9 and, by writing s instead of jω/ωh, the denominator of Eq. 2.3.9 takes the form:

   D(s) = 0.125 s³ + 0.5 s² + s + 1
(2.3.15)
To obtain the canonical form we divide this equation by 0.125. Then, to find the roots, we equate it to zero:

   s³ + 4s² + 8s + 8 = 0      (2.3.16)
The roots of this function are the normalized poles of the function F(s):

   s1n, s2n = σ1n ± jω1n = −1 ± j1.7321      (2.3.17)

   s3n = σ3n = −2      (2.3.18)
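These roots are easy to verify numerically; here is a quick cross-check (illustrative Python, standing in for the authors' own Matlab-style routines in Part 6):

```python
import numpy as np

# Roots of the normalized MFA denominator, Eq. 2.3.16: s^3 + 4 s^2 + 8 s + 8 = 0.
poles = np.roots([1.0, 4.0, 8.0, 8.0])

# Separate the real pole from the complex conjugate pair.
real_pole = min(poles, key=lambda p: abs(p.imag))
pair = sorted((p for p in poles if abs(p.imag) > 1e-9), key=lambda p: p.imag)
print(real_pole)   # close to -2
print(pair)        # close to -1 -+ j1.7321
```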
The canonical form of the denominator of Eq. 2.3.9, with s instead of jω/ωh, is:

   D(s) = s³ + s²·1/(1 − n) + s·1/[mn(1 − n)] + 1/[mn(1 − n)]      (2.3.19)
2.3.2 Bessel Poles (MFED)

For a third-order system with unit envelope delay at the origin the denominator must equal the Bessel polynomial s³ + 6s² + 15s + 15. This polynomial and the denominator in Eq. 2.3.19 can be the same only if the corresponding coefficients are equal. Thus we may write the following two equations:

   1/(1 − n) = 6   and   1/[mn(1 − n)] = 15      (2.3.20)

from which:

   m = 0.480   and   n = 0.833      (2.3.21)

The roots of Eq. 2.3.19, with the above values for m and n, are the Bessel poles of the function F(s):

   s1n, s2n = σ1n ± jω1n = −1.8389 ± j1.7544;   s3n = σ3n = −2.3222
(2.3.22)
Note that the same values are obtained from the pole tables (or by running the
BESTAP routine in Part 6); in general, for Bessel poles, skn = skt.
With these poles the frequency response, according to Eq. 2.3.11, results in the
curve b in Fig. 2.3.2. The Bessel poles are derived from the condition that the transfer
function has a unit envelope delay at the origin, so there is no simple way of relating it to
the upper half power frequency ωH. We need to calculate |F(ω)| numerically for a range,
say, 1 ≤ ω/ωh ≤ 3, using either Eq. 2.3.11 or the FREQW algorithm in Part 6, and find ωH
from it. The bandwidth improvement factor for Bessel poles is given in Table 2.3.1 at the
end of this section.
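Such a numerical search can be sketched in a few lines; a bisection over |F(ω)| built from the Eq. 2.3.22 pole values (illustrative Python, in place of the FREQW routine) recovers the MFED bandwidth improvement:

```python
# Normalized third-order Bessel poles, Eq. 2.3.22.
poles = [-1.8389 + 1.7544j, -1.8389 - 1.7544j, -2.3222]
dc_gain = abs(poles[0] * poles[1] * poles[2])

def mag(w):
    """|F(jw)|, normalized so that |F(0)| = 1."""
    den = 1.0
    for p in poles:
        den *= abs(1j * w - p)
    return dc_gain / den

# Bisect for the half power point |F| = 1/sqrt(2) within 1 <= w/wh <= 3.
lo, hi = 1.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mag(mid) > 2.0 ** -0.5:
        lo = mid
    else:
        hi = mid
w_H = 0.5 * (lo + hi)
print(w_H)   # about 1.76, the MFED entry in Table 2.3.1
```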
[Figure 2.3.2 plot: normalized |Vo/(Ii R)| vs ω/ωh for a) m = 0.67, C/Ci = 3; b) m = 0.48, C/Ci = 5; c) m = 0.67, C/Ci = 2, with the L = 0 reference; ωh = 1/[R(C + Ci)].]
Fig. 2.3.2: Frequency response of the third-order series peaking circuit for different values of
m. The correct setting for the required pole pattern is achieved by the input to output
capacitance ratio, C/Ci. A fair circuit performance comparison is met by normalization to the
total capacitance C + Ci. Here we have: a) MFA; b) MFED; c) SPEC, and the non-peaking
(L = 0) case as a reference. Although of highest bandwidth, the SPEC case is non-optimal,
owing to the slight but notable dip in the range 0.5 < ω/ωh < 1.2.
2.3.3 A Special Case (SPEC)

If we simply split the capacitance in the ratio C/Ci = 2 (n = 0.667) and keep m = 0.667, the normalized poles are:

   s1n, s2n = −0.75 ± j1.9848      (2.3.23)

   s3n = −1.5      (2.3.24)

The corresponding frequency response is the curve c in Fig. 2.3.2. This gives a
bandwidth improvement ηb = 2.28, which would sound very fine were there not a small dip in
the range 0.5 < ω/ωh < 1.2. So we regrettably realize that the ratio C/Ci can not be
chosen at random. The aberrations are even greater for the envelope delay and the step
response, as we shall see later.
2.3.4 Phase Response
For the calculation of the phase response we can use Eq. 2.2.31, but we must also add
the influence of the real pole σ3n:

   φ = arctan[(ω/ωh − ω1n)/σ1n] + arctan[(ω/ωh + ω1n)/σ1n] + arctan[(ω/ωh)/σ3n]      (2.3.25)
In Fig. 2.3.3 we have plotted the phase response for different values of the parameters
m and n. Instead of the parameter n, the ratio C/Ci is given.
[Figure 2.3.3 plot: phase from 0° to −270° vs ω/ωh for a) m = 0.67, C/Ci = 3; b) m = 0.48, C/Ci = 5; c) m = 0.67, C/Ci = 2; L = 0 reference; ωh = 1/[R(C + Ci)].]
Fig. 2.3.3: Phase response of the third-order series peaking circuit for different values of m:
a) MFA; b) MFED; c) SPEC; the non-peaking (L = 0) case is the reference.
2.3.5 Envelope Delay

We apply Eq. 2.2.35, to which we add the influence of the real pole σ3n:

   τe ωh = σ1n/[σ1n² + (ω/ωh − ω1n)²] + σ1n/[σ1n² + (ω/ωh + ω1n)²] + σ3n/[σ3n² + (ω/ωh)²]      (2.3.26)
In Fig. 2.3.4 the corresponding plots for different values of the parameters m and n are
shown; instead of n the ratio C/Ci is given.
[Figure 2.3.4 plot: envelope delay τe·ωh vs ω/ωh for the same three cases and the L = 0 reference; ωh = 1/[R(C + Ci)].]
Fig. 2.3.4: Envelope delay of the third-order series peaking circuit for some characteristic
values of m: a) MFA; b) MFED; c) SPEC; the non-peaking (L = 0) case is the reference. Note
the MFED flatness extending beyond ωh.
2.3.6 Step Response

Expressed with its poles, the transfer function is:

   F(s) = −s1 s2 s3 / [(s − s1)(s − s2)(s − s3)]      (2.3.27)

Since we apply a unit step to the network input, the above expression must be multiplied
by 1/s to obtain a new, fourth-order function:

   G(s) = −s1 s2 s3 / [s (s − s1)(s − s2)(s − s3)]      (2.3.28)
The step response is the inverse Laplace transform of G(s), equal to the sum of its residues:

   g(t) = ℒ⁻¹{G(s)} = Σ res      (2.3.29)

Since the calculation of a three-pole network step response is lengthy, we give here only
the final result. The curious reader can find the full derivation in Appendix 2.3:

   g(t) = 1 + [|σ3| √(σ1² + ω1²) / (ω1 √C)] e^{σ1 t} sin(ω1 t + φ1) − [(σ1² + ω1²)/C] e^{σ3 t}      (2.3.30)

where:

   C = (σ1 − σ3)² + ω1²      (2.3.31)

Note that we have written φ1 for the initial phase angle of the resonance function,
instead of the usual θ, in order to emphasize the difference between the response phase and
the angle of the complex conjugated pole pair (in two-pole circuits they have the same
value). We enter the normalized poles from Eq. 2.3.17, 2.3.22, and 2.3.24, and the
normalized time t/[R(Ci + C)] = t/T, obtaining the step responses (plotted in Fig. 2.3.5):

a) For the Butterworth poles, where m = 0.667 and n = 0.750, φ1 = π rad:

   ga(t) = 1 + 1.155 e^{−t/T} sin(1.732 t/T + π) − e^{−2 t/T}      (2.3.32)

b) For the Bessel poles, where m = 0.480 and n = 0.833, φ1 = 2.5970 rad:

   gb(t) = 1 + 1.839 e^{−1.839 t/T} sin(1.754 t/T + 2.597) − 1.951 e^{−2.322 t/T}      (2.3.33)

c) For the SPEC case, where m = 0.667 and n = 0.667, φ1 = π rad:

   gc(t) = 1 + 0.756 e^{−0.75 t/T} sin(1.985 t/T + π) − e^{−1.5 t/T}      (2.3.34)
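The overshoot figures quoted in Table 2.3.1 can be reproduced by sampling these step responses (illustrative Python; sin(x + π) is written as −sin(x)):

```python
import math

def g_mfa(t):
    # Eq. 2.3.32, Butterworth poles; sin(x + pi) = -sin(x).
    return 1.0 - 1.155 * math.exp(-t) * math.sin(1.732 * t) - math.exp(-2.0 * t)

def g_mfed(t):
    # Eq. 2.3.33, Bessel poles.
    return (1.0 + 1.839 * math.exp(-1.839 * t) * math.sin(1.754 * t + 2.597)
            - 1.951 * math.exp(-2.322 * t))

ts = [0.001 * i for i in range(12001)]
overshoot_mfa = max(map(g_mfa, ts)) - 1.0
overshoot_mfed = max(map(g_mfed, ts)) - 1.0
print(round(100.0 * overshoot_mfa, 1))    # about 8.1 per cent
print(round(100.0 * overshoot_mfed, 1))   # well under 1 per cent
```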
[Figure 2.3.5 plot: step responses vo/(ii R) vs t/T, T = 1/ωh = R(C + Ci), for a) m = 0.67, C/Ci = 3; b) m = 0.48, C/Ci = 5; c) m = 0.67, C/Ci = 2; L = 0 reference.]
Fig. 2.3.5: Step response of the third-order series peaking circuit for some characteristic
values of m: a) MFA; b) MFED; c) SPEC; the non-peaking (L = 0) case is the reference. The
overshoot of both the MFA and the SPEC case is too large to be suitable for pulse amplification.
The pole patterns for the three response types discussed are shown in Fig. 2.3.6.
Note the three different second-order curves fitting each pole pattern: a (large) horizontal
ellipse for MFED, a circle for MFA, and a vertical ellipse for the SPEC case.
[Fig. 2.3.6 pole data (in 1/T units, T = R(C + Ci)): MFA: s1,2 = −1.0000 ± j1.7321, s3 = −2.0000; MFED: s1,2 = −1.8389 ± j1.7544, s3 = −2.3222; SPEC: s1,2 = −0.7500 ± j1.9848, s3 = −1.5000.]
Fig. 2.3.6: Pole patterns of the 3-pole series peaking circuit for the MFA, the MFED, and the
SPEC case. The curves on which the poles lie are: a circle with the center at the origin for MFA;
an ellipse with both foci on the real axis (the nearer at the origin) for the MFED; and an ellipse
with both foci on the imaginary axis for the SPEC case (which is effectively a Chebyshev-type
pole pattern). Also shown are the characteristic circles of each complex conjugate pole pair.
Table 2.3.1 summarizes the parameters of the three versions of the 3-pole series
peaking circuit. Note the high overshoot values for the MFA and the SPEC case, both
unacceptable for a pulse amplifier.
Table 2.3.1

response   m      n      ηb     ηr     δ [%]
a) MFA     0.667  0.750  2.00   2.27   8.1
b) MFED    0.480  0.833  1.76   1.79   0.7
c) SPEC    0.667  0.667  2.28   2.33   10.2
[Fig. 2.4.1 circuits: a) the T-coil with bridging capacitance Cb, tap loading capacitance C, input current Ii and load R; b) the equivalent circuit with La, Lb and the mutual inductance LM; c) the generalized impedance circuit with loop currents I1, I2, I3.]
Fig. 2.4.1: a) The basic T-coil circuit: the voltage output is taken from the center tap node of the
inductance L, and its two parts are magnetically coupled by the factor 0 < k < 1; b) an equivalent
circuit with no magnetic coupling between the coils; the coupling has been replaced by the mutual
inductance LM; c) a simplified generalized impedance circuit, excited by the current generator Ii,
showing the current loops.
[Fig. 2.4.2 relations: LM = k√(L1 L2); L = L1 + L2 + 2LM; L1 = La − LM; L2 = Lb − LM; in b) k = 0.]
Fig. 2.4.2: Modeling the coupling factor: a) the T-coil coupling factor k between the two halves
L1 and L2 of the total inductance L can be represented by b) an equivalent circuit, having two
separate (non-coupled) inductances, in which the magnetic coupling is modeled by the mutual
inductance LM (negative in value), so that L1 = La − LM and L2 = Lb − LM.
1
Networks with tapped coils were already being used for amplifier peaking in 1954 by F.A. Muller [2.16],
but since the bridging capacitance Cb is not shown in that article, the networks described do not have a
constant input impedance as do the T-coil networks discussed in this and the following three sections.
If the output is taken from the loading resistor R, the network in Fig. 2.4.1a
behaves as an all pass network. However, for peaking purposes we take the output
voltage from the capacitor C, and in this application the circuit is a low pass filter.
The equivalent network in Fig. 2.4.1b needs to be explained. We will do this with
the aid of Fig. 2.4.2. The original network has a center tapped coil whose inductance L can
be calculated by the general equation for two coupled coils [Ref. 2.18, 2.28]:

   L = L1 + L2 + 2 LM      (2.4.1)

where L1 and L2 are the inductances of the respective coil parts (which, in general, need
not be equal) and LM is their mutual inductance. LM is taken twice, since the magnetic
induction from L1 to L2 is equal to the induction from L2 to L1 and both contribute to the
total. If k is the factor of magnetic coupling between L1 and L2, the mutual inductance is:

   LM = k √(L1 L2)      (2.4.2)

In the equivalent circuit of Fig. 2.4.1b the branch inductances are:

   La = L1 + LM,   Lb = L2 + LM      (2.4.3)

or, inversely:

   L1 = La − LM,   L2 = Lb − LM      (2.4.4)
Note the negative sign of LM in the equivalent circuit, which is a consequence of the magnetic coupling; owing to this,
the driving impedance at the center tap as seen by C is lower than without the coupling. In
the symmetrical case, when L1 = L2, we can calculate the value of L1 and L2 from the
required coupling k and the total inductance L:

   L1 = L2 = L / [2(1 + k)]      (2.4.5)

Thus we have proved that the circuits in Fig. 2.4.1a and 2.4.1b are equivalent, even though
no coupling exists between the coils in the circuit of Fig. 2.4.1b.
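Equations 2.4.1, 2.4.2 and 2.4.5 are easy to sanity-check together: splitting a total inductance symmetrically and recombining the parts must return L (illustrative Python; the 100 nH value is an arbitrary example):

```python
import math

def tcoil_split(L, k):
    """Symmetrical split of a center tapped coil, Eq. 2.4.5 and Eq. 2.4.2."""
    L1 = L2 = L / (2.0 * (1.0 + k))
    LM = k * math.sqrt(L1 * L2)
    return L1, L2, LM

L1, L2, LM = tcoil_split(100e-9, 0.5)    # Bessel coupling k = 0.5
total = L1 + L2 + 2.0 * LM               # Eq. 2.4.1
print(L1, LM, total)                     # total recovers the original 100 nH
```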
The corresponding generalized impedance model of the T-coil circuit is shown in
Fig. 2.4.1c, where the input voltage Vi is equal to the product of the input current and the
circuit impedance, Ii Zi. The input current splits into I1 and I2. The current I3 flows in the
remaining loop. The impedances in the branches are:

   A = 1/(sCb),   B = sLa,   C = sLb,   D = sLM + 1/(sC),   E = R      (2.4.6)
We form a system of equations in accordance with the current loops in Fig. 2.4.1c:

   Vi = I1(B + D) − I2 D − I3 B
   0 = −I1 D + I2(C + D + E) − I3 C
   0 = −I1 B − I2 C + I3(A + B + C)      (2.4.7)

The system determinant is:

        | B + D    −D          −B        |
   Δ =  | −D       C + D + E   −C        |      (2.4.8, 2.4.9)
        | −B       −C          A + B + C |

After multiplication some terms cancel. Thus the solution is simplified to:

   Δ = BCA + BDA + BEA + BEC + DCA + DEA + DEB + DEC
(2.4.10)
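The reduction of the determinant to these eight products can be verified numerically with arbitrary branch values (illustrative Python; A…E here are plain test numbers, not circuit impedances):

```python
A, B, C, D, E = 1.0, 2.0, 3.0, 4.0, 5.0

# Loop-equation matrix of Eq. 2.4.7 / Eq. 2.4.8.
M = [[B + D, -D,        -B       ],
     [-D,    C + D + E, -C       ],
     [-B,    -C,        A + B + C]]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

delta = det3(M)
eight = B*C*A + B*D*A + B*E*A + B*E*C + D*C*A + D*E*A + D*E*B + D*E*C
print(delta, eight)   # 186.0 186.0
```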
For further calculation we shall need the cofactors Δ11 and Δ12. The cofactor for I1 is:

   Δ11 = Vi [(C + D + E)(A + B + C) − C²]
       = Vi (CA + CB + DA + DB + DC + EA + EB + EC)      (2.4.11)

and the cofactor for I2 is:

   Δ12 = Vi (DA + DB + DC + BC)      (2.4.12)
Let us first find the input admittance, which we would like to be equal to 1/R = 1/E:

   Yi = I1/Vi = Δ11/(Vi Δ) = (CA + CB + DA + DB + DC + EA + EB + EC)/Δ = 1/E      (2.4.13)

After eliminating the fractions and canceling some terms, we obtain the expression:

   BCA + BDA + BEA + DCA − ECA − E²A − E²B − E²C = 0      (2.4.14)

Now we put in the values from Eq. 2.4.6, considering that La = Lb, perform all the
multiplications, and arrange the terms with decreasing powers of s. We obtain:
   [La²/Cb − L LM/Cb − R²L] s + [L/(C Cb) − R²/Cb] (1/s) = 0      (2.4.15)

which has the form:

   s K1 + s⁻¹ K2 = 0      (2.4.16)

This expression tells us that the input admittance can indeed be made equal to 1/R, as we
wanted in Eq. 2.4.13. For a constant input admittance circuit, Eq. 2.4.16 must be valid for
any ω [Ref. 2.21]. This is possible only if both K1 and K2 are zero (Ross method):

   K1 = (La² − L LM)/Cb − R²L = 0      (2.4.17)

   K2 = L/(C Cb) − R²/Cb = 0      (2.4.18)
From Eq. 2.4.18 it follows directly that:

   L = R²C      (2.4.19)

The output voltage is taken from the capacitor C, which carries the loop current
difference I1 − I2:

   Vo = (I1 − I2)/(sC)      (2.4.20)

By the Cramer rule the loop currents are:

   I1 = Δ11/Δ   and   I2 = Δ12/Δ      (2.4.21)

so that:

   Vo = (Δ11 − Δ12)/(sC Δ)      (2.4.22)
Again we make use of the common expressions in Eq. 2.4.6. The difference of the two
cofactors is:

   Δ11 − Δ12 = Vi (CA + EA + EB + EC)      (2.4.23)

With these expressions, the transimpedance is:

   Vo/I1 = (CA + EA + EB + EC) / [sC (CA + CB + DA + DB + DC + EA + EB + EC)]      (2.4.24)
The voltage Vi is a factor of both the numerator and the denominator, so it cancels out.
Now we replace the common expressions with those from Eq. 2.4.6, express L by
Eq. 2.4.19, perform the indicated multiplication, make the long division of the polynomials,
and the result is a relatively simple expression:

   F(s) = Vo/(I1 R) = 1 / (s² R² C Cb + s RC/2 + 1)      (2.4.25)
Although the author of this idea, Bob Ross, calculated it by hand [Ref. 2.21], we will not
follow his example, because this calculation is formidable work. With modern computer
programs (such as Mathematica [Ref. 2.34] or similar [Ref. 2.35, 2.38, 2.39, 2.40]), the
calculation takes less time than is needed to type in the data.
For those designers who want to construct a distributed amplifier using electron
tubes or FETs (but not transistors, as we will see in Part 3!), where the resistor R is
replaced by another T-coil circuit and so forth, it is important to know the transimpedance
from the input current to the voltage VR. The result is:

   VR/(I1 R) = (s² R² C Cb − s RC/2 + 1) / (s² R² C Cb + s RC/2 + 1)      (2.4.26)
Besides the two poles on the left side of the s-plane, s1p and s2p, this equation also has two
symmetrically placed zeros on the right side of the s-plane, s1z and s2z, as shown in
Fig. 2.4.3. Since Eq. 2.4.26 has equal powers of s both in the numerator and the
denominator, it is an all pass response. We shall return to this when we calculate the
step response.
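That the magnitude of Eq. 2.4.26 really is unity at every frequency can be confirmed directly (illustrative Python, with R = C = 1 and the Bessel value Cb = C/12):

```python
R, C, Cb = 1.0, 1.0, 1.0 / 12.0

def F(s):
    """Eq. 2.4.26, the all pass transimpedance (normalized)."""
    return ((s * s * R * R * C * Cb - s * R * C / 2.0 + 1.0)
          / (s * s * R * R * C * Cb + s * R * C / 2.0 + 1.0))

mags = [abs(F(1j * w)) for w in (0.1, 1.0, 10.0, 100.0)]
print(mags)   # all equal to 1 within rounding
```

On the imaginary axis the numerator is the complex conjugate of the denominator, which is why the magnitude is exactly one.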
The poles are the roots of the denominator of Eq. 2.4.25. The canonical form is:

   s² + s/(2RCb) + 1/(R²C Cb) = 0      (2.4.27)

   s1,2 = −1/(4RCb) ± √[1/(4RCb)² − 1/(R²C Cb)]      (2.4.28)

   s1,2 = −[1/(4RCb)]·[1 ∓ √(1 − 16 Cb/C)]      (2.4.29)
[Fig. 2.4.3 plot: poles s1p, s2p in the left half plane and zeros s1z, s2z mirrored on the right; increasing k moves poles and zeros along the circles.]
Fig. 2.4.3: The poles (s1p and s2p) and zeros (s1z and s2z) of the all pass transimpedance
function corresponding to Eq. 2.4.26 and Fig. 2.4.1a. By changing the bridging capacitance
Cb and the mutual inductance LM (by the coupling factor k) according to Eq. 2.4.19, both
poles and both zeros travel along the circles shown.
An efficient inductive peaking circuit must have complex poles. By taking the
imaginary unit out of the square root, the terms within it exchange signs. Then the pole
angle θ can be calculated from the ratio of its imaginary to the real component, as we have
done before. From Fig. 2.2.4:

   tan θ = Im{s1}/Re{s1} = √(16 Cb/C − 1)      (2.4.30)

This gives a general result:

   Cb/C = (1 + tan² θ)/16      (2.4.31)
The Bessel pole placement is shown in Fig. 2.4.4. The characteristic angle θ is measured
from the positive real axis.
[Fig. 2.4.4 plot: the complex conjugate pole pair s1, s2 at the angle θ from the positive real axis.]
Fig. 2.4.4: The layout of the complex conjugate poles s1 and s2 of a second-order
Bessel transfer function. In this case, the angle is θ = 150°.
By using the pole angle, which we have calculated previously, and Eq. 2.4.31, the
corresponding bridging capacitance can be found:

for Bessel poles:

   θ = 150°,   tan² θ = 1/3,   Cb = C/12      (2.4.32)

and for Butterworth poles:

   θ = 135°,   tan² θ = 1,   Cb = C/8      (2.4.33)
From Eq. 2.4.17, with La = L/2 and L = R²C, the mutual inductance is LM = R²(C/4 − Cb); thus:

for Bessel poles:

   LM = R²C/6      (2.4.34)

and for Butterworth poles:

   LM = R²C/8      (2.4.35)
The general expression for the coupling factor is [Ref. 2.21, 2.28, 2.33]:

   k = LM/√(L1 L2) = LM/√[(La − LM)(Lb − LM)]      (2.4.36)

By inserting La = Lb = L/2, L = R²C and LM = R²(C/4 − Cb), we obtain:

   k = (C − 4Cb)/(C + 4Cb)      (2.4.37)

If Cb is expressed by Eq. 2.4.31, we may derive a very interesting expression for the
coupling factor k:

   k = (3 − tan² θ)/(5 + tan² θ)      (2.4.38)

Since θ = 150° for the Bessel pole pair and 135° for the Butterworth pole pair, the
corresponding coupling factor is:

for Bessel poles:

   k = 0.5      (2.4.39)

and for Butterworth poles:

   k = 0.33      (2.4.40)
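Equations 2.4.31 and 2.4.38 condense the whole two-pole T-coil design into the single pole angle θ; a small helper makes this explicit (illustrative Python):

```python
import math

def tcoil_from_angle(theta_deg):
    """Return Cb/C (Eq. 2.4.31) and k (Eq. 2.4.38) for a pole angle theta."""
    t2 = math.tan(math.radians(theta_deg)) ** 2
    return (1.0 + t2) / 16.0, (3.0 - t2) / (5.0 + t2)

cb_mfed, k_mfed = tcoil_from_angle(150.0)   # Bessel pair
cb_mfa, k_mfa = tcoil_from_angle(135.0)     # Butterworth pair
print(cb_mfed, k_mfed)   # 1/12 and 0.5
print(cb_mfa, k_mfa)     # 1/8 and about 0.333
```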
Let us calculate the parameters k, LM and Cb for two additional cases. If we want
to avoid any overshoot, both poles must be real and equal. In this case θ = 180° and
the damping of the circuit is critical (CD). The expression under the root of Eq. 2.4.29 must
then be zero and we obtain:

   Cb = C/16,   LM = 3R²C/16,   k = 0.6      (2.4.41)

We are also interested in the circuit values for the limiting case in which the coupling
factor k, and consequently the mutual inductance LM, is zero. Here we calculate Cb from
Eq. 2.4.31:

   Cb = C/4,   k = 0,   LM = 0      (2.4.42)
The next task is to calculate the poles for all four cases. We will show only the
calculation for the Bessel poles; the other calculations are analogous.
For the starting expression we use the denominator of Eq. 2.4.25 in the canonical
form, which we equate to zero:

   s² + s/(2RCb) + 1/(R²C Cb) = 0      (2.4.43)

Now we insert Cb = C/12, which corresponds to the Bessel poles; the result is:

   s² + s·6/(RC) + 12/(R²C²) = 0      (2.4.44)

   s1,2 = (1/RC)(−3 ± j√3)      (2.4.45)

Similarly, for the Butterworth poles (Cb = C/8):

   s1,2 = (1/RC)(−2 ± j2)      (2.4.46)

For critical damping (CD) the imaginary part of the poles is zero, so Cb = C/16, as found
before. The poles are:

   s1,2 = σ1 = −4/(RC)      (2.4.47)

In the no-coupling case (k = 0) the bridging capacitance Cb = C/4 and the poles are:

   s1,2 = (1/RC)(−1 ± j√3)
(2.4.48)
For all four kinds of poles the input impedance is Zi = R = √(L/C), independent of
frequency. Now we have all the necessary data to calculate the frequency, phase, envelope delay
and the step response.
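All four pole pairs follow from Eq. 2.4.43 with the respective Cb/C ratio; a direct quadratic solution confirms the values above (illustrative Python, with RC = 1):

```python
import cmath

def tcoil_poles(cb_over_c):
    """Roots of s^2 + s/(2 R Cb) + 1/(R^2 C Cb) = 0 (Eq. 2.4.43), RC = 1."""
    b = 1.0 / (2.0 * cb_over_c)   # s coefficient, in 1/RC units
    c = 1.0 / cb_over_c           # constant term, in 1/(RC)^2 units
    d = cmath.sqrt(b * b / 4.0 - c)
    return -b / 2.0 + d, -b / 2.0 - d

for name, ratio in (("MFA", 1/8), ("MFED", 1/12), ("CD", 1/16), ("k=0", 1/4)):
    print(name, tcoil_poles(ratio))
```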
2.4.1 Frequency Response
We can use the amplitude- and frequency-normalized Eq. 2.2.27:

   |F(ω)| = (σ1n² + ω1n²) / √{[σ1n² + (ω/ωh − ω1n)²]·[σ1n² + (ω/ωh + ω1n)²]}

By inserting the values for the normalized poles, with RC = 1 and ωh = 1/RC, we
can plot the response for each of the four types of poles, as shown in Fig. 2.4.5.
By comparing this diagram with the frequency response plot of the simple series
peaking circuit in Fig. 2.2.3, we realize that the upper cut off frequency ωH of the T-coil
circuit is exactly twice that of the two-pole series peaking circuit (comparing, of
course, the responses for the same kind of poles). E.g., for Butterworth poles
we had s1n,2n = −1 ± j (Eq. 2.2.16) for the series peaking circuit, whilst here we have
s1n,2n = −2 ± j2. Thus the bandwidth improvement factor for the two-pole T-coil circuit,
compared with the single-pole (RC) circuit, is ηb = 2.83 (the ratio of the absolute values of
the poles). Similarly, for the other kinds of poles the bandwidth improvement is greater too, as
reported in Table 2.4.1 at the end of this section. Owing to this property, it is worth
considering the use of a T-coil circuit whenever possible. For the same reason we shall
discuss T-coil circuits further in detail.
[Figure 2.4.5 plot: |Vo/(Ii R)| vs ω/ωh for the two-pole T-coil with a) k = 0.33, Cb/C = 1/8; b) k = 0.5, Cb/C = 1/12; c) k = 0.6, Cb/C = 1/16; d) k = 0, Cb/C = 1/4; L = R²C, ωh = 1/RC; L = 0 reference.]
Fig. 2.4.5: The frequency response magnitude of the T-coil circuit, taken from the coil center tap. The
curve a) is the MFA (Butterworth) case, b) is the MFED (Bessel) case, c) is the critical damping
(CD) case and d) is the no-coupling (k = 0) case. The non-peaking (L = 0) case is the reference.
The bandwidth extension is notably larger, not only compared with the two-pole series peaking,
but also with the three-pole series peaking circuit.
2.4.2 Phase Response

We again use Eq. 2.2.31:

   φ = arctan[(ω/ωh − ω1n)/σ1n] + arctan[(ω/ωh + ω1n)/σ1n]

and, by inserting the values for the normalized poles, as we did in the calculation of the
frequency response, we obtain the plots shown in Fig. 2.4.6.
2.4.3 Envelope Delay

We use again Eq. 2.2.35:

   τe ωh = σ1n/[σ1n² + (ω/ωh − ω1n)²] + σ1n/[σ1n² + (ω/ωh + ω1n)²]

and, with the pole values as before, we get the responses shown in Fig. 2.4.7.
[Figure 2.4.6 plot: phase from 0° to −180° vs ω/ωh for the same four cases; L = R²C, ωh = 1/RC.]
Fig. 2.4.6: The transfer function phase angle of the T-coil circuit, for the same values of coupling
and capacitance ratio as for the frequency response magnitude: a) is MFA, b) is MFED, c) is CD
and d) is the no-coupling case. The non-peaking (L = 0) case is the reference.
[Figure 2.4.7 plot: envelope delay τe·ωh vs ω/ωh for the same four cases; L = R²C, ωh = 1/RC.]
Fig. 2.4.7: The envelope delay of the T-coil circuit: a) MFA, b) MFED, c) CD, d) k = 0. The T-coil
circuit delay at low frequencies is exactly one half of that in the L = 0 case.
2.4.4 Step Response

With the poles calculated above, the step responses of the four cases, with t' = t/RC, are:

   ga(t') = 1 − √2 e^{−2t'} sin(2t' + π/4)      (2.4.49)

   gb(t') = 1 − 2 e^{−3t'} sin(√3 t' + π/6)      (2.4.50)

   gc(t') = 1 − (1 + 4t') e^{−4t'}      (2.4.51)

   gd(t') = 1 − (2/√3) e^{−t'} sin(√3 t' + 1.0472)      (2.4.52)
The plots corresponding to these four equations are shown in Fig. 2.4.8. Also shown
are the corresponding four pole patterns.
[Figure 2.4.8 plot: step responses vo/(ii R) vs t/RC for the four cases, with the four pole patterns s1a,2a = −2 ± j2, s1b,2b = −3 ± j√3, s1c = s2c = −4 and s1d,2d = −1 ± j√3 (in 1/RC units).]
Fig. 2.4.8: The step response of the T-coil circuit. As before, a) is MFA, b) is MFED, c) is CD
and d) is the case k = 0. The non-peaking case (L = 0) is the reference. The no-coupling case
has excessive overshoot, 16.3%, but the MFA overshoot, 4.3%, is also high. Note the pole patterns
of the four cases: the closer the poles are to the imaginary axis, the greater is the overshoot. The
diameter of the circle on which the poles lie is 4/RC.
For the step response of the all pass network (output taken from the loading resistor R),
we write Eq. 2.4.26 with its zeros and poles:

   F(s) = (s²R²C Cb − sRC/2 + 1)/(s²R²C Cb + sRC/2 + 1) = [(s − s3)(s − s4)] / [(s − s1)(s − s2)]      (2.4.54)

By multiplication with 1/s we obtain the corresponding formula for the step response in
the frequency domain:

   G(s) = [(s − s3)(s − s4)] / [s (s − s1)(s − s2)]      (2.4.55)

The step response is found from the residues of G(s)e^{st}:

   res1 = lim_{s→s1} (s − s1)·[(s − s3)(s − s4) e^{st}] / [s (s − s1)(s − s2)]
        = [(s1 − s3)(s1 − s4)] / [s1 (s1 − s2)] · e^{s1 t}      (2.4.56)

   res2 = lim_{s→s2} (s − s2)·[(s − s3)(s − s4) e^{st}] / [s (s − s1)(s − s2)]
        = [(s2 − s3)(s2 − s4)] / [s2 (s2 − s1)] · e^{s2 t}      (2.4.57)
The residue of the pole at the origin is 1, so the complete step response is:

   g(t) = 1 + res1 + res2      (2.4.58)

For critical damping (CD) both zeros and both poles are real. Then s1 = s2 and
s3 = s4 = −s1. There are only two residues, which are calculated in two different ways
(because the residue of the double pole must be calculated from the first derivative):

   res0 = lim_{s→0} s · (s − s3)² e^{st} / [s (s − s1)²] = s3²/s1² = 1      (because s3 = −s1)

   res1 = lim_{s→s1} d/ds { (s − s1)² · (s − s3)² e^{st} / [s (s − s1)²] }
        = lim_{s→s1} d/ds { (s − s3)² e^{st} / s }      (2.4.59)
The sum of both residues is the time response sought. We insert the normalized
poles and put t/RC = t' to obtain:

   ga(t') = 1 − 4 e^{−2t'} sin(2t')      (2.4.60)

   gb(t') = 1 − 4√3 e^{−3t'} sin(√3 t')      (2.4.61)

   gc(t') = 1 − 16 t' e^{−4t'}      (2.4.62)

   gd(t') = 1 − (4/√3) e^{−t'} sin(√3 t')      (2.4.63)

All four plots are shown in Fig. 2.4.9. Note the initial transition owing to the bridging
capacitance Cb at high frequencies, the dip where the phase inversion between the high
pass and the low pass section occurs, and the transition to the final value.
[Figure 2.4.9 plot: all pass step responses vR/(ii R) vs t/RC for the four cases; note the initial step, the dip, and the settling to the final value.]
Fig. 2.4.9: The step response of the T-coil circuit, but now with the output from the loading
resistor R (this is interesting for cascading stages, as explained later). As before, a) MFA,
b) MFED, c) CD, and d) k = 0. The system has the characteristics of an all pass filter.
All the significant data of the T-coil peaking circuit are collected in Table 2.4.1.

Table 2.4.1

response type   k      Cb/C   ηb     ηr     δ [%]
a) MFA          0.33   1/8    2.83   2.98   4.30
b) MFED         0.50   1/12   2.72   2.76   0.43
c) CD           0.60   1/16   2.57   2.66   0.00
d) k = 0        0.00   1/4    2.54   3.23   16.3
capacitances C1, C2, and C3. This will cause a slight decrease in bandwidth, but as we
will see later the decrease introduced by the T-coils will not harm the operation of the total
system in any way.
[Fig. 2.4.10 data: the TV camera signal feeds, over 75 Ω coaxial cable, the video monitor (R1 = 1 MΩ, C1 = 27 pF), the vectorscope (R2 = 30 kΩ, C2 = 22 pF) and the video modulator (R3 = 100 kΩ, C3 = 25 pF).]
Fig. 2.4.10: An example of a system with different input impedances (TV studio equipment).
The signal from a color TV camera is controlled on the monitor screen, the RGB color vectors
are measured by the vectorscope, and, finally, the signal is sent to the video modulator for
broadcasting. All interconnections are made by coaxial cable with a characteristic
impedance of 75 Ω. With long cables, adding considerable delay, the input capacitances can
affect the highest frequencies, causing reflections.
On the basis of the data in Fig. 2.4.10 we will calculate the T-coil for each of the
three devices. In addition we will calculate the bandwidth at each input. Since the whole
system must faithfully transmit pulses, we will choose Bessel poles for all three T-coils.
We use the following four relations:

   L = R²C (Eq. 2.4.19);   Cb = C/12 (Eq. 2.4.32);   k = 0.5 (Eq. 2.4.37);   ηb = 2.72 (Table 2.4.1)

and the upper half power frequency:

   fH = ηb fh = ηb ωh/(2π) = ηb/(2πRC)

So we calculate (with R = 75 Ω):

a) for the monitor: L1 = 152 nH, Cb1 = 2.25 pF, fh1 = 78.6 MHz, fH1 = 214 MHz;
b) for the vectorscope: L2 = 124 nH, Cb2 = 1.83 pF, fh2 = 96.5 MHz, fH2 = 262 MHz;
c) for the video modulator: L3 = 141 nH, Cb3 = 2.08 pF, fh3 = 84.9 MHz, fH3 = 231 MHz.
The bandwidths are far above the requirement of the system, which is about 6 MHz
for either a color or a black and white signal. Fig. 2.4.11 shows the schematic diagram in
which the calculated component values are implemented.
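Under the stated relations each design reduces to a few lines of arithmetic (illustrative Python; R = 75 Ω since each T-coil input is matched to the cable):

```python
import math

R = 75.0          # each input is matched to the 75 ohm cable
eta_b = 2.72      # MFED bandwidth improvement, Table 2.4.1

for name, C in (("monitor", 27e-12), ("vectorscope", 22e-12), ("modulator", 25e-12)):
    L = R * R * C                      # Eq. 2.4.19
    Cb = C / 12.0                      # Eq. 2.4.32, Bessel poles
    fh = 1.0 / (2.0 * math.pi * R * C)
    print(name, L, Cb, fh, eta_b * fh)
```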
[Fig. 2.4.11 schematic: each input is compensated by a T-coil section (L1 k1 Cb1, L2 k2 Cb2, L3 k3 Cb3) in front of R1C1, R2C2 and R3C3; the chain of 75 Ω coax is terminated by RL = 75 Ω.]
Fig. 2.4.11: Input impedance compensation by T-coil sections prevents signal reflections.
Each section of the coaxial cable sees the terminating 75 Ω resistor at the end of the chain.
The bandwidth is affected only slightly. The circuit values are given in the text above.
Since a properly designed T-coil circuit has a constant input impedance, it may be
used in connection with a series peaking circuit in order to improve further the system
bandwidth, as we shall see in Sec. 2.6. But first we shall examine a 3-pole T-coil system.
2.5 Three-Pole T-coil Circuit

[Fig. 2.5.1/2.5.2: the three-pole T-coil circuit (a T-coil with coupling k and bridging capacitance Cb, driven through the input node with Ci, loaded by R) and its MFED pole pattern: s1,2 = −1.839 ± j1.754, s3 = −2.322.]
If properly designed, the basic two-pole T-coil circuit will have a constant input
impedance R, independent of frequency. This property allows a great simplification of the
three-pole network analysis. To both poles of the T-coil, s1 and s2, we only need to add the
third input pole s3 = −1/(RCi). In order to design an efficient peaking circuit, the tap of
the coil must feed the greater capacitance, so Ci < C, because the T-coil has no influence on
the input pole s3. Since the network is reciprocal (the current input and voltage output
nodes can be exchanged without affecting the response) we can always fulfil this
requirement. Also, because of the constant input impedance we can obtain the expression
for the transfer function from that of a two-pole T-coil circuit (Eq. 2.4.25) by adding to it
the influence of the third pole s3 (resulting in a simple multiplication of the first-order and
the second-order transfer function):

   F(s) = Vo/(I1 R) = [1/(s² R² C Cb + s RC/2 + 1)] · [1/(sRCi + 1)]      (2.5.1)
From the analysis of the two-pole T-coil circuit we remember that the diameter of the
circle on which both poles s1 and s2 lie is D1 = 4/(RC) (see Fig. 2.4.3 or 2.4.8). The
diameter D2 of the circle which goes through the real pole s3 = σ3 is simply 1/(RCi) (the
reason why we have drawn the circle through this pole also will become obvious later,
when we analyze the four-pole L+T circuits). We introduce a new parameter:

   n = C/Ci      (2.5.2)

The ratio of the diameters of the circles going through the poles and the origin is then:

   D2/D1 = [1/(RCi)] / [4/(RC)] = (C/Ci)/4      (2.5.3)

   D2/D1 = n/4      (2.5.4)
[Fig. 2.5.3 relations: ω1 = (4/RC) sin θ1 cos θ1; σ1 = (4/RC) cos² θ1; D1 = 4/RC; M1 = √(σ1² + ω1²); θ1 = arctan(ω1/σ1).]
1
Fig. 2.5.3: The basic trigonometric relations of the main parameters for one of
the poles of the T-coil circuit. Knowing one pair of parameters, it is possible to
calculate the rest by these simple relations.
Fig. 2.5.3 illustrates some basic trigonometric relations between the polar and
Cartesian expressions of the poles, obtained from the similarity of the two right angle
triangles (0, σ1, s1) and (0, s1, D1):

   σ1 = (4/RC) cos² θ1      (2.5.5)

   ω1 = (4/RC) cos θ1 sin θ1      (2.5.6)
From these equations we can calculate the coupling factor k and the bridging
capacitance Cb. Since:

   tan θ1 = ω1/σ1      (2.5.7)

Eq. 2.4.31 and Eq. 2.4.38 again give Cb/C = (1 + tan² θ1)/16 and:

   k = (3 − tan² θ1)/(5 + tan² θ1)
Next we must calculate the parameter n from the table of poles in Part 4. For the
Butterworth poles, listed in Table 4.3.1, the values for the order 3 are:

   s1t,2t = σ1t ± jω1t = −0.5000 ± j0.8660;   s3t = σ3t = −1.0000      (2.5.8)

so that θ1 = 120°. From Eq. 2.5.5 it follows that:

   D1 = σ1t/cos² θ1      (2.5.9)

   D2/D1 = σ3t cos² θ1/σ1t = cos² 120°/0.5000 = 0.5000      (2.5.10)

and, since D2/D1 = n/4, we have n = 2, or:

   Ci = C/2      (2.5.11)

Returning to the equations for k and Cb we find k = 0 (no coupling!) and
Cb = 0.25 C. Just as it was for the two-pole T-coil circuit, here, too, L = R²C. So we have
all the circuit parameters for the Butterworth poles.
We can take the values for the Bessel poles of order 3 either from Table 4.4.3 in
Part 4, or by running the BESTAP routine (Part 6):

   s1t,2t = σ1t ± jω1t = −1.8389 ± j1.7544;   s3t = σ3t = −2.3221      (2.5.12)

so that θ1 = 136.35°. In a similar way as before, we obtain:

   k = 0.3536,   Cb = 0.12 C,   Ci = 0.38 C      (2.5.13)
As in the three-pole series peaking circuit, the reference non-peaking cut off frequency is
defined with the total capacitance:

   ωh = 1/[R(C + Ci)] = 1/(RCc)      (2.5.14)
This is important because, if the coil is replaced by a short circuit, both capacitances
appear in parallel with the loading resistor R. Since Ci = C/n, we may express both
capacitances with the total capacitance Cc = C + Ci and obtain:

   C = Cc·n/(n + 1)   and   Ci = Cc·1/(n + 1)      (2.5.15)
So far we have used the pole data from the tables, since we needed only the ratios of
these poles. But to calculate the frequency, phase, envelope delay, and step responses we
shall need the actual values of the poles. We have calculated the poles of the T-coil circuit
by Eq. 2.4.29, which we repeat here for convenience:

   s1,2 = −[1/(4RCb)]·[1 ∓ √(1 − 16 Cb/C)]

We shall use the Butterworth poles to explain the procedure. For these poles we
have Cb = C/4 and n = 2. By inserting these values into the above formula we obtain:

   s1,2 = (1/RC)(−1 ± j√3)      (2.5.16)

Since C = Cc·n/(n + 1), we have 1/(RC) = [(n + 1)/n]·1/(RCc), so that:

   s1,2 = [(n + 1)/n]·(1/RCc)·(−1 ± j√3) = (1/RCc)(−1.5 ± j2.5981)      (2.5.17)

   s3 = −1/(RCi) = −(n + 1)/(RCc) = −3/(RCc)      (2.5.18)
=$
In a similar way we also calculate the values for Bessel poles and obtain:
= ", #
"
#))'! 4 #(&$#
VGc
and
=$
$'%%(
VGc
(2.5.19)
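This renormalization from 1/RC to 1/RCc units is a plain scaling by (n + 1)/n; a short check of the Butterworth case (illustrative Python):

```python
def rescale(poles_rc, n):
    """Rescale poles from 1/(RC) units to 1/(R*Cc) units, using C = Cc*n/(n+1)."""
    return [p * (n + 1.0) / n for p in poles_rc]

# Butterworth: Cb = C/4 gives s1,2 = -1 -+ j*sqrt(3) in 1/RC units, with n = 2.
s12 = rescale([-1.0 + 1j * 3.0 ** 0.5, -1.0 - 1j * 3.0 ** 0.5], 2.0)
s3 = -(2.0 + 1.0)      # s3 = -(n + 1) in 1/(R*Cc) units
print(s12, s3)         # about (-1.5 -+ j2.5981) and -3
```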
When calculating the values for the critical damping case (CD) we must consider
that the imaginary parts of the poles s1 and s2 must be zero. This gives Cb = C/16. Here
we may choose n = 2, which means that Ci = C/2. The corresponding poles, which are
all real, are:

   s1,2 = −6/(RCc)   and   s3 = −3/(RCc)      (2.5.20)
normalized poles (RCc = 1) and the normalized frequency ω/ωh. Thus we obtain the
following expression:

   |F(ω)| = (σ1n² + ω1n²)·|σ3n| / √{[σ1n² + (ω/ωh − ω1n)²]·[σ1n² + (ω/ωh + ω1n)²]·[σ3n² + (ω/ωh)²]}      (2.5.21)
The plot for all three types of poles is shown in Fig. 2.5.4. By comparing the curve
a (MFA, where k = 0) with the curve d in Fig. 2.4.5, where also k = 0, we realize that we
have achieved a bandwidth extension just by splitting the total circuit capacitance into the
input capacitance Ci and the coil loading capacitance C.
[Figure 2.5.4 plot: |Vo/(Ii R)| vs ω/ωh for a) k = 0, Cb/C = 0.25, C/Ci = 2; b) k = 0.35, Cb/C = 0.12, C/Ci = 2.63; c) k = 0.60, Cb/C = 0.06, C/Ci = 2; L = R²C, ωh = 1/[R(C + Ci)]; L = 0 reference.]
Fig. 2.5.4: Three-pole T-coil network frequency response: a) MFA; b) MFED; c) CD case. The
non-peaking response (L = 0) is the reference. The MFA bandwidth is larger than that of the two-pole
circuit in Fig. 2.4.5; in contrast, the MFED bandwidth is nearly the same, but the circuit can be
realized more easily, owing to the lower magnetic coupling factor required. Note also that, owing
to the possibility of separating the total capacitance into a driving and a loading part, the reference
non-peaking cut off frequency ωh must be defined as 1/[R(C + Ci)].
The phase response is calculated by summing the contribution of each pole (Eq. 2.2.30):

\varphi = -\left[\arctan\frac{\frac{\omega}{\omega_h} + \omega_{1n}}{\sigma_{1n}} + \arctan\frac{\frac{\omega}{\omega_h} - \omega_{1n}}{\sigma_{1n}} + \arctan\frac{\omega/\omega_h}{\sigma_{3n}}\right]
[Figure 2.5.5 shows the phase (0° to −270°) vs. ω/ω_h for the same three cases and parameters as in Fig. 2.5.4, with the non-peaking case L = 0 as reference.]
Fig. 2.5.5: Three-pole T-coil network phase response: a) MFA; b) MFED; c) CD case. Note that at high frequencies the 3-pole system phase asymptote is −270° (3 × −90°).
[Figure 2.5.6 shows the normalized envelope delay τ_e ω_h vs. ω/ω_h for the same three cases and parameters as in Fig. 2.5.4.]
Fig. 2.5.6: Three-pole T-coil network envelope delay: a) MFA; b) MFED; c) CD case. Note that the MFED flatness now extends to nearly 1.5 ω_h.
For the step response, the transfer function written with the poles (here for the CD case, where the pole s₁ is double and s₃ is real; s₁ = −σ₁ and s₃ = −σ₃):

F(s) = \frac{-s_1^2\, s_3}{(s - s_1)^2 (s - s_3)}    (2.5.24)

must be multiplied by the unit step operator 1/s to obtain the form appropriate for the ℒ⁻¹ transform:

G(s) = \frac{-s_1^2\, s_3}{s\,(s - s_1)^2 (s - s_3)}    (2.5.25)

and the step response is the inverse Laplace transform of G(s), which in turn is equal to the sum of its residues:

g(t) = \mathcal{L}^{-1}\{G(s)\} = \sum \mathrm{res}\,\frac{-s_1^2\, s_3\, e^{st}}{s\,(s - s_1)^2 (s - s_3)}    (2.5.26)

The residue at s = 0 is equal to 1. At the double pole s₁ we must differentiate:

\mathrm{res}_1 = \lim_{s \to s_1} \frac{d}{ds}\left[\frac{-s_1^2\, s_3\, e^{st}}{s\,(s - s_3)}\right]

whilst at the simple pole s₃:

\mathrm{res}_2 = \lim_{s \to s_3} \frac{-s_1^2\, s_3\, e^{st}}{s\,(s - s_1)^2} = \frac{-s_1^2\, e^{s_3 t}}{(s_3 - s_1)^2}    (2.5.27)

The sum of all three residues is the sought step response (2.5.28). Finally, we normalize the poles (Eq. 2.5.20: σ_{1n} = 6 and σ_{3n} = 3) and normalize the time as t/T, where T = R(C_i + C), to obtain the formula by which the plot c in Fig. 2.5.7 is calculated:

g_c(t) = 1 + 3\left(1 + 2\,\frac{t}{T}\right) e^{-6\,t/T} - 4\, e^{-3\,t/T}    (2.5.30)
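Eq. 2.5.30 is easy to verify numerically: the CD step response must start at zero with zero slope (a three-pole system) and settle to 1, with no overshoot. A minimal sketch:

```python
import math

def g_cd(t):
    """Eq. 2.5.30: CD three-pole T-coil step response; t in units of T = R(Ci + C)."""
    return 1 + 3 * (1 + 2 * t) * math.exp(-6 * t) - 4 * math.exp(-3 * t)

# Starts at zero (flat, since the first two derivatives vanish) and settles to 1.
print(g_cd(0.0), g_cd(0.001), g_cd(5.0))
```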
[Figure 2.5.7 shows the step responses vs. t/[R(C + C_i)] for the same three cases and parameters as in Fig. 2.5.4 (a: k = 0.00, b: k = 0.35, c: k = 0.60), with the non-peaking case L = 0 as reference.]
Fig. 2.5.7: The step response of the three-pole T-coil circuit: a) MFA; b) MFED; c) CD. The non-peaking case (L = 0) is the reference. Since the total capacitance C + C_i is equal to C of the two-pole T-coil circuit, the MFED rise time is also nearly identical. However, the three-pole circuit is much easier to realize in practice, owing to the lower k required.
responses, making three plots each, with a different ratio C/C_i = n. In the first group we have:

Group 1: k = 0.33, C_b = C/8
a) n = 2.5:  s_{1n,2n} = -2.8 ± j2.8,  s_{3n} = -3.5
b) n = 2:    s_{1n,2n} = -3 ± j3,      s_{3n} = -3
c) n = 1.5

The poles are selected so that the sum C + C_i is the same for all three cases. In this way we have the same upper half power frequency ω_h for any set of poles. This is necessary in order to have the same scale for all three plots. For the above poles we obtain the frequency response as in Fig. 2.5.8 and the step response as in Fig. 2.5.9.

Group 2: k = 0.2, C_b = 0.17 C
a) n = 2:    s_{1n,2n} = -2.5 ± j2.908,  s_{3n} = -3
b) n = 1.5:  s_{1n,2n} = -2.5 ± j3.227,  s_{3n} = -2.5
c) n = 1:    s_{1n,2n} = -3 ± j3.837,    s_{3n} = -2
The corresponding frequency response plots are displayed in Fig. 2.5.10 and the step responses in Fig. 2.5.11. From Fig. 2.5.11 it is evident that we have decreased the coupling factor in the second group too much. Not a single curve in this figure is suitable for the peaking circuit of a pulse amplifier. In curve a the overshoot is excessive, whilst curve c exhibits too slow a response. Curve b rounds off too soon, reaching the final value with a much slower slope. In a plot with a coarser time scale this curve would clearly show a missing chunk of the step response. Needless to say, it would be very annoying if an oscilloscope amplifier were to have such a step response.
All the important data for the three-pole T-coil peaking circuits are collected in Table 2.5.1. It is worth noting that we achieve a three-pole MFED response with the coupling factor k = 0.35 (η_r = 2.78), whilst for a two-pole T-coil MFED response k = 0.5 was necessary (for a similar η_r = 2.76). If we are satisfied with a slightly smaller bandwidth it is possible to use the parameters of Group 1, where the coupling factor is only 0.33. Such a small coupling factor is much easier to achieve than k = 0.5. So for the practical construction of a wideband amplifier we find the three-pole T-coil circuits very convenient.
Table 2.5.1

  response type |   k   | C_b/C | C/C_i | η_b  | η_r  | δ [%]
  --------------+-------+-------+-------+------+------+------
  MFA           | 0     | 0.25  | 2     | 3.00 | 2.88 | 8.08
  MFED          | 0.35  | 0.125 | 2.645 | 2.75 | 2.78 | 0.75
  CD            | 0.60  | 0.063 | 2     | 2.22 | 2.26 | 0.00
  Group 1, a    | 0.33  | 0.125 | 2.5   | 2.75 | 2.75 | 0.80
  Group 1, b    | 0.33  | 0.125 | 2     | 2.59 | 2.64 | 0.00
  Group 1, c    | 0.33  | 0.125 | 1.5   | 2.35 | 2.60 | 0.00
  Group 2, a    | 0.2   | 0.167 | 2     | 2.84 | 2.80 | 1.85
  Group 2, b    | 0.2   | 0.167 | 1.5   | 2.59 | 2.62 | 0.00
  Group 2, c    | 0.2   | 0.167 | 1     | 2.13 | 2.16 | 0.00
[Figure 2.5.8 (Group 1 frequency response) shows |V_o/(I_i R)| vs. ω/ω_h with k = 1/3, C_b/C = 1/8 and C/C_i = 2.5, 2.0, 1.5 for curves a), b), c); L = R²C, ω_h = 1/[R(C + C_i)], non-peaking case L = 0 as reference.]
[Figure 2.5.9 (Group 1 step response) shows v_o/(i_i R) vs. t/[R(C + C_i)] for the same parameters, together with the pole positions s_{1a,b,c}, s_{2a,b,c}, s_{3a,b,c} in the complex plane.]
Fig. 2.5.9: Low coupling factor, Group 1: step response. In all three cases the real pole s₃ is placed closer to the origin (becoming dominant) than in the MFA and MFED case, making the responses more similar to the CD case. The characteristic circle of the complex conjugate pole pair has a slightly different diameter in each case.
[Figure 2.5.10 (Group 2 frequency response) shows |V_o/(I_i R)| vs. ω/ω_h with k = 0.2, C_b/C = 1/6 and C/C_i = 2.0, 1.5, 1.0 for curves a), b), c); non-peaking case L = 0 as reference.]
[Figure 2.5.11 (Group 2 step response) shows v_o/(i_i R) vs. t/[R(C + C_i)] for the same parameters, with the pole positions shown in the complex plane.]
Fig. 2.5.11: Low coupling factor, Group 2: step response. The pole s_{3a} is slightly further away from the real axis than s_{3b} or s_{3c}, therefore causing an overshoot larger than in the MFED case. Both responses b and c are over-damped, reaching the final value much later than in the MFED case.
[Figure 2.6.1 shows the four-pole L+T peaking network: the current source i_i drives the input inductance L_i and input capacitance C_i, followed by the T-coil (bridged by C_b, loaded by C and R). Figure 2.6.2 shows the corresponding pole pattern: the T-coil pole pair s₁, s₂ lies on the circle with diameter D₁ and angle θ₁, and the series peaking pair s₃, s₄ on the circle with diameter D₃ and angle θ₃.]
Here we utilize the basic property of a T-coil circuit, its constant and real input impedance, presenting the loading resistor to the input series peaking section. Since the input capacitance C_i and the input inductance L_i form an inverted letter L, we call the network in Fig. 2.6.1 the L+T circuit. This is a four-pole network and its input impedance is not constant, but it is similar to the series peaking system, which we have already calculated (Eq. 2.2.44 to 2.2.48, plots in Fig. 2.2.9).
The transfer function of the L+T network is simply the product of the transfer function for a two-pole series peaking circuit (Eq. 2.2.4) and the transfer function for a two-pole T-coil circuit (Eq. 2.4.25). Explicitly:
"
V
7V# Gi#
V # GGb
J a=b
=
"
=
"
=#
=#
#
#
#
7VGi
7V Gi
VGb
V GGb
(2.6.1)
Both polynomials in the denominator are written in the canonical form. It would be useless
to multiply them, because this would make the analysis very complicated. If we replace V
in the later numerator by 1, a normalized equation results.
The T-coil section has two poles, which we can rewrite from Eq. 2.4.28:

s_{1,2} = -\sigma_1 \pm j\omega_1 = -\frac{1}{4 R C_b} \pm j\sqrt{\frac{1}{R^2 C C_b} - \frac{1}{16 R^2 C_b^2}}    (2.6.2)
whilst the input section L has two poles, rewritten from Eq. 2.2.5:

s_{3,4} = -\sigma_3 \pm j\omega_3 = -\frac{1}{2 m R C_i} \pm j\sqrt{\frac{1}{m R^2 C_i^2} - \frac{1}{4 m^2 R^2 C_i^2}}    (2.6.3)
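Eq. 2.6.2 and 2.6.3 can be checked directly against the MFED circuit values derived later in this section (m = 0.6488, C/C_i = 3.4608, C_b/C = 0.0681, normalized so that R(C + C_i) = 1); a sketch:

```python
import math

# MFED (Bessel) L+T parameters from Table 2.6.1, with R = 1 and C + Ci = 1:
m, n, cb_ratio = 0.6488, 3.4608, 0.0681   # n = C/Ci, cb_ratio = Cb/C
Ci = 1 / (n + 1)
C = n / (n + 1)
Cb = cb_ratio * C

# T-coil pole pair (Eq. 2.6.2):
s1 = 1 / (4 * Cb)
w1 = math.sqrt(1 / (C * Cb) - s1**2)
# Series peaking pole pair (Eq. 2.6.3):
s3 = 1 / (2 * m * Ci)
w3 = math.sqrt(1 / (m * Ci**2) - s3**2)
print(s1, w1, s3, w3)   # ≈ 4.73, 1.42, 3.44, 4.34 (the scaled 4th-order Bessel poles)
```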
The T-coil circuit extends the bandwidth twice as much as the series peaking circuit, so in order to extend the bandwidth of the L+T network as much as possible, the T-coil tap must be connected to whichever capacitance is greater; thus C_i < C. Therefore, the circle with the diameter D₁ and the angle θ₁, corresponding to the T-coil circuit poles s_{1,2}, is smaller than the circle with the diameter D₃ and the angle θ₃ corresponding to the poles s_{3,4} of the L branch of the circuit.
Our task is to calculate the circuit parameters for the Bessel pole pattern shown in Fig. 2.6.2, which gives an MFED response. The corresponding values for Bessel poles, shown in Table 4.4.3 in Part 4, order n = 4, are:

s_{1t,2t} = -\sigma_{1t} \pm j\omega_{1t} = -2.8962 \pm j\,0.8672,  \theta_1 = 163.33°
s_{3t,4t} = -\sigma_{3t} \pm j\omega_{3t} = -2.1038 \pm j\,2.6574,  \theta_3 = 128.37°

The diameters of the two circles are:

D_1 = \frac{\sigma_{1t}^2 + \omega_{1t}^2}{\sigma_{1t}} = \frac{|s_{1t}|}{|\cos\theta_1|}  and  D_3 = \frac{\sigma_{3t}^2 + \omega_{3t}^2}{\sigma_{3t}} = \frac{|s_{3t}|}{|\cos\theta_3|}    (2.6.4)

and their ratio is:

\frac{D_3}{D_1} = \frac{\sigma_{3t}^2 + \omega_{3t}^2}{\sigma_{3t}} \cdot \frac{\sigma_{1t}}{\sigma_{1t}^2 + \omega_{1t}^2} = 1.7304    (2.6.5)

From Fig. 2.2.2 it is evident that the diameter of the circle, on which the poles of the series peaking circuit lie, is 2/(R C_i). But from Fig. 2.4.3, in the case of a two-pole T-coil circuit, the circle diameter is 4/(R C). Furthermore it is:

\frac{D_3}{D_1} = \frac{2/(R C_i)}{4/(R C)} = \frac{C}{2 C_i} = \frac{n}{2},  or  n = \frac{C}{C_i} = 2\,\frac{D_3}{D_1}    (2.6.6)

n = 2 \times 1.7304 = 3.4608    (2.6.7)

As for the three-pole T-coil analysis, here, too, we express the upper half power frequency of the uncompensated circuit (without coils) as a function of the total capacitance C_c:

\omega_h = \frac{1}{R(C + C_i)} = \frac{1}{R C_c},  with  C = C_c\,\frac{n}{n+1}  and  C_i = \frac{C_c}{n+1}

so that:

C = \frac{C_c}{1.2890}  and  C_i = \frac{C_c}{4.4608}    (2.6.8)

Now we can calculate all other parameters of the L+T circuit and also the actual values of the poles:

a) Coupling factor (Eq. 2.4.36):

k = \frac{3 - \tan^2\theta_1}{5 + \tan^2\theta_1} = \frac{3 - \tan^2 163.33°}{5 + \tan^2 163.33°} = 0.5718

b) Series peaking coefficient:

m = \frac{1 + \tan^2\theta_3}{4} = \frac{1}{4\cos^2 128.37°} = 0.6488

e) Real part of the pole s₁, ℜ{s₁} = −σ₁ (Eq. 2.5.5 and Fig. 2.5.3):

\sigma_1 = \frac{4\cos^2\theta_1}{R C} = \frac{4 \times 1.2890\,\cos^2 163.33°}{R C_c} = \frac{4.7317}{R C_c}

f) Imaginary part of the pole s₁, ℑ{s₁} = ω₁ (Eq. 2.5.6 and Fig. 2.5.3):

\omega_1 = \frac{-4\cos\theta_1 \sin\theta_1}{R C} = \frac{-4 \times 1.2890\,\cos 163.33° \sin 163.33°}{R C_c} = \frac{1.4167}{R C_c}

g) Real part of the pole s₃, ℜ{s₃} = −σ₃ (Eq. 2.5.5 and Fig. 2.5.3):

\sigma_3 = \frac{2\cos^2\theta_3}{R C_i} = \frac{2 \times 4.4608\,\cos^2 128.37°}{R C_c} = \frac{3.4376}{R C_c}

h) Imaginary part of the pole s₃, ℑ{s₃} = ω₃ (Eq. 2.5.6 and Fig. 2.5.3):

\omega_3 = \frac{-2\cos\theta_3 \sin\theta_3}{R C_i} = \frac{-2 \times 4.4608\,\cos 128.37° \sin 128.37°}{R C_c} = \frac{4.3419}{R C_c}
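The geometrical synthesis above reduces to a few lines of arithmetic. A sketch (the small differences in the last digit come from the 4-digit pole values):

```python
import math

# 4th-order Bessel pole pairs (Table 4.4.3 in Part 4):
s1 = complex(-2.8962, 0.8672)   # T-coil pair
s3 = complex(-2.1038, 2.6574)   # series peaking pair

th1 = math.atan2(s1.imag, s1.real)    # theta1 ≈ 163.33° (in radians)
th3 = math.atan2(s3.imag, s3.real)    # theta3 ≈ 128.37°

k = (3 - math.tan(th1)**2) / (5 + math.tan(th1)**2)   # coupling factor (Eq. 2.4.36)
m = 1 / (4 * math.cos(th3)**2)                        # series peaking coefficient
D1 = abs(s1)**2 / (-s1.real)                          # circle diameters (Eq. 2.6.4)
D3 = abs(s3)**2 / (-s3.real)
n = 2 * D3 / D1                                       # Eq. 2.6.6-2.6.7
print(k, m, n)   # ≈ 0.5718, 0.649, 3.461
```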
As above, we can calculate the parameters for the MFA response from the normalized (RC_c = 1) values of the 4th-order Butterworth system (Table 4.3.1, BUTTAP, Part 6):

s_{1n,2n} = -\sigma_{1n} \pm j\omega_{1n} = -4.1213 \pm j\,1.7071,  with  \theta_1 = 157.50°  and  \theta_3 = 112.50°

The L+T network parameters for some other types of poles are given in Table 2.6.1 at the end of this section.
For Butterworth poles (and only for these!) it is very easy to calculate the upper half power frequency ω_H: it is equal to the radius of the circle centered at the origin, on which all four poles lie, which in turn is equal to the absolute value of any one of the four poles. If we use the normalized pole values, the circle radius is also equal to the factor of bandwidth improvement, η_b. By dividing this value by RC_c, we obtain ω_H. We can use any one of the four poles, e.g. s_{1n}:

\eta_b = \frac{\omega_H}{\omega_h} = |s_{1n}| = \sqrt{\sigma_{1n}^2 + \omega_{1n}^2} = \sqrt{4.1213^2 + 1.7071^2} = 4.4609    (2.6.9)
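For the Butterworth case this is a one-line computation:

```python
# Eq. 2.6.9: the bandwidth improvement equals the Butterworth pole radius.
eta_b = (4.1213**2 + 1.7071**2) ** 0.5
print(eta_b)   # ≈ 4.4609
```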
The frequency response magnitude, written with the normalized poles, is:

|F(\omega)| = \frac{(\sigma_{1n}^2 + \omega_{1n}^2)(\sigma_{3n}^2 + \omega_{3n}^2)}{\sqrt{\left[\sigma_{1n}^2 + \left(\frac{\omega}{\omega_h} - \omega_{1n}\right)^2\right]\left[\sigma_{1n}^2 + \left(\frac{\omega}{\omega_h} + \omega_{1n}\right)^2\right]\left[\sigma_{3n}^2 + \left(\frac{\omega}{\omega_h} - \omega_{3n}\right)^2\right]\left[\sigma_{3n}^2 + \left(\frac{\omega}{\omega_h} + \omega_{3n}\right)^2\right]}}    (2.6.10)
Since we have inserted the normalized poles, the frequency, too, had to be normalized as ω/ω_h. Fig. 2.6.3 shows the frequency response for a) MFA and b) MFED and also for two other pole placements, reported in [Ref. 2.28], the data of which are:

The curve c) corresponds to the poles of Group C of [Ref. 2.28]:
s_{1n,2n} = -3.3252 ± j\,0.5863,  \theta_1 = 170.00°
s_{3n,4n} = -1.7071 ± j\,4.1213,  \theta_3 = 112.50°

The curve d) corresponds to the poles of Group A of [Ref. 2.28]:
s_{1n,2n} = -3.8332 ± j\,1.7874,  \theta_1 = 155.00°
s_{3n,4n} = -2.1024 ± j\,5.0013,  \theta_3 = 112.80°

Whilst c and d offer an improvement in η_b and η_r, their step response is far from optimum. In Fig. 2.6.4, the plot e) is the Chebyshev response with 0.05° phase ripple [Ref. 2.24, 2.30]:
s_{1n,2n} = -3.7912 ± j\,1.8656,  \theta_1 = 153.80°
s_{3n,4n} = -2.4861 ± j\,4.6755,  \theta_3 = 118.00°
The plot f) is the Gaussian frequency response (to −12 dB) [Ref. 2.24, 2.30]:
s_{1n,2n} = -3.3835 ± j\,2.0647,  \theta_1 = 148.83°
s_{3n,4n} = -3.4150 ± j\,6.2556,  \theta_3 = 118.60°

The plot g) corresponds to a double pair of Bessel poles, with the following data:
s_{1n,2n} = s_{3n,4n} = -4.5000 ± j\,2.5981,  \theta = 150.00°
[Figure 2.6.3 shows |V_o/(I_i R)| vs. ω/ω_h for the four-pole L+T circuit with L = R²C, L_i = mR²C_i, ω_h = 1/[R(C + C_i)], and the non-peaking case L = L_i = 0 as reference. Curve parameters: a) m = 1.71, k = 0.55, C/C_i = 4.83, C_b/C = 0.073; b) m = 0.65, k = 0.57, C/C_i = 3.46, C_b/C = 0.068; c) m = 1.49, k = 0.59, C/C_i = 6.00, C_b/C = 0.064; d) m = 1.66, k = 0.53, C/C_i = 6.00, C_b/C = 0.067.]
Fig. 2.6.3: Four-pole L+T peaking circuit frequency-response: a) MFA; b) MFED; c) Group C;
d) Group A. In the non-peaking reference case both inductances are zero.
[Figure 2.6.4 shows the corresponding plots for: e) m = 1.13, k = 0.54, C/C_i = 4.79, C_b/C = 0.076; f) m = 1.09, k = 0.49, C/C_i = 6.43, C_b/C = 0.085; g) m = 0.33, k = 0.50, C/C_i = 2.00, C_b/C = 0.083.]
Fig. 2.6.4: Some additional frequency response plots of the four-pole L+T peaking circuit: e) Chebyshev with 0.05° phase ripple; f) Gaussian to −12 dB; g) double 2nd-order Bessel. Note the lower bandwidth of g) (2.9 ω_h) compared with b) in Fig. 2.6.3 (3.47 ω_h). This clearly shows that the bandwidth of a cascade of identical stages is lower than if the stages have staggered poles.
Note: The lower bandwidth (ω_H = 2.9 ω_h) of the system with repeated poles, plot g, compared with the staggered pole placement of Fig. 2.6.2, plot b (ω_H = 3.47 ω_h), clearly shows that using repeated poles is not a clever idea! See also the step response plots.
All these groups of poles can be found in the tables [Ref. 2.30]. Here we can see the
extreme adaptability of the calculation method based on the trigonometric relations as
shown in Fig. 2.5.2, 2.5.3, 2.6.2, and the corresponding formulae. We call this method
geometrical synthesis. By this method, the calculation of circuit parameters for the
inductive peaking amplifier with any suitable pole placement is very easy and we will use it
extensively throughout the rest of the book.
2.6.2 Phase Response
We use Eq. 2.2.30 for each of the four poles:

\varphi = -\left[\arctan\frac{\frac{\omega}{\omega_h} + \omega_{1n}}{\sigma_{1n}} + \arctan\frac{\frac{\omega}{\omega_h} - \omega_{1n}}{\sigma_{1n}} + \arctan\frac{\frac{\omega}{\omega_h} + \omega_{3n}}{\sigma_{3n}} + \arctan\frac{\frac{\omega}{\omega_h} - \omega_{3n}}{\sigma_{3n}}\right]    (2.6.11)
The phase response plots for the first four groups of the poles are shown in Fig. 2.6.5. Although the vertical scale ends at −300°, the phase asymptote at high frequencies is −360° for all 4th-order responses.
[Figure 2.6.5 shows the phase (0° to −300°) vs. ω/ω_h for the same four cases and parameters as in Fig. 2.6.3.]
Fig. 2.6.5: Four-pole L+T peaking circuit phase response: a) MFA; b) MFED; c) Group C; d) Group A. The non-peaking case, in which both inductors are zero, has a −90° maximum phase shift; all other cases, being of 4th-order, have a −360° maximum phase shift.
The envelope delay is the sum of the contributions of all four poles:

\tau_e\,\omega_h = -\left[\frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\frac{\omega}{\omega_h} - \omega_{1n}\right)^2} + \frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\frac{\omega}{\omega_h} + \omega_{1n}\right)^2} + \frac{\sigma_{3n}}{\sigma_{3n}^2 + \left(\frac{\omega}{\omega_h} - \omega_{3n}\right)^2} + \frac{\sigma_{3n}}{\sigma_{3n}^2 + \left(\frac{\omega}{\omega_h} + \omega_{3n}\right)^2}\right]    (2.6.12)
The corresponding plots for the first four groups of poles are displayed in Fig. 2.6.6.
Note that the value of the delay at low frequency is slightly different for each pole group.
This is owed to a different normalization for each circuit.
[Figure 2.6.6 shows the normalized envelope delay τ_e ω_h vs. ω/ω_h for the same four cases and parameters as in Fig. 2.6.3.]
Fig. 2.6.6: Four-pole L+T peaking circuit envelope delay: a) MFA; b) MFED; c) Group C; d) Group A. For the non-peaking case the envelope delay at DC is 1; all other cases have a larger bandwidth and consequently a lower delay. Note the MFED flatness up to nearly 3.3 ω_h.
The step response is calculated from the transfer function written with the four poles:

F(s) = \frac{s_1 s_2 s_3 s_4}{(s - s_1)(s - s_2)(s - s_3)(s - s_4)}    (2.6.13)

The ℒ transform of the step response is obtained by multiplying this function by the unit step operator 1/s, resulting in a new, five-pole function:

G(s) = \frac{s_1 s_2 s_3 s_4}{s\,(s - s_1)(s - s_2)(s - s_3)(s - s_4)}    (2.6.14)
and to obtain the step response in the time domain, we calculate the ℒ⁻¹ transform:

g(t) = \mathcal{L}^{-1}\{G(s)\} = \sum_{i=0}^{4} \mathrm{res}_i    (2.6.15)

The analytical calculation is a pure routine of algebra, but it would require some 8 pages to present. Readers who are interested in the details can find it in Appendix 2.3. Here we will write only the result:

g(t) = 1 + K_1\, e^{-\sigma_1 t} \sin(\omega_1 t + \theta_1) + K_3\, e^{-\sigma_3 t} \sin(\omega_3 t + \theta_3)    (2.6.16)

where the amplitudes K₁ and K₃ (containing the factors (σ₃² + ω₃²)/√(A² + ω₁²B²) and (σ₁² + ω₁²)/√(C² + ω₃²B²), respectively, as derived in Appendix 2.3) and the angles θ₁ and θ₃ are expressed with the auxiliary quantities:

A = (\sigma_1 - \sigma_3)^2 + \omega_1^2 - \omega_3^2,  B = 2\,(\sigma_1 - \sigma_3),  C = (\sigma_1 - \sigma_3)^2 + \omega_3^2 - \omega_1^2

\theta_1 = \arctan\frac{\omega_1 (A - \sigma_1 B)}{\sigma_1 A + \omega_1^2 B},  \theta_3 = \arctan\frac{\omega_3 (C - \sigma_3 B)}{\sigma_3 C + \omega_3^2 B}    (2.6.17)

Note: The angles θ₁ and θ₃ calculated by the arctangent will not always give a correct result. Depending on the pole pattern, one or both will require an addition of π radians, as we show in Appendix 2.3. In the following relations we give the correct values.
By inserting the normalized values for poles, and the normalized time t/T, where T = R(C_i + C), we obtain the following step response functions:

a) MFA response (Butterworth poles):

g(t) = 1 + 2.4142\, e^{-4.1213\, t/T} \sin(1.7071\, t/T + 0.7854 + \pi) + 0.9968\, e^{-1.7071\, t/T} \sin(4.1213\, t/T + 0.7854 + \pi/2)

b) MFED response (Bessel poles):

g(t) = 1 + 5.6632\, e^{-4.7317\, t/T} \sin(1.4167\, t/T + 0.4866 - \pi) + 1.6484\, e^{-3.4376\, t/T} \sin(4.3419\, t/T + 1.5389)

c) Response for the poles of Group A:

g(t) = 1 + 2.7233\, e^{-3.8332\, t/T} \sin(1.7874\, t/T + 0.6807 + \pi) + 0.7587\, e^{-2.1024\, t/T} \sin(5.0013\, t/T - 1.2250 + \pi)

d) Response for the poles of Group C:

g(t) = 1 + 5.9875\, e^{-3.3252\, t/T} \sin(0.5863\, t/T + 0.2843 + \pi) + 0.7310\, e^{-1.7284\, t/T} \sin(3.8475\, t/T - 1.1920 + \pi)
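The responses above are easy to check numerically: a four-pole system with no zeros must start at exactly zero (with zero initial slope) and settle to 1. A sketch evaluating cases a) and b):

```python
import math

def g_mfa(t):
    # a) MFA (Butterworth) L+T step response; t in units of T = R(Ci + C)
    return (1 + 2.4142 * math.exp(-4.1213 * t) * math.sin(1.7071 * t + 0.7854 + math.pi)
              + 0.9968 * math.exp(-1.7071 * t) * math.sin(4.1213 * t + 0.7854 + math.pi / 2))

def g_mfed(t):
    # b) MFED (Bessel) L+T step response
    return (1 + 5.6632 * math.exp(-4.7317 * t) * math.sin(1.4167 * t + 0.4866 - math.pi)
              + 1.6484 * math.exp(-3.4376 * t) * math.sin(4.3419 * t + 1.5389))

print(g_mfa(0.0), g_mfed(0.0))   # both ≈ 0 (within the 4-digit rounding of the constants)
```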
Table 2.6.1

  response type           |   k    |   m    |   n    | C_b/C  | η_b  | η_r  | δ [%]
  ------------------------+--------+--------+--------+--------+------+------+------
  MFA                     | 0.5469 | 1.7071 | 4.8283 | 0.0732 | 4.46 | 4.02 | 10.9
  MFED (4th-order Bessel) | 0.5718 | 0.6488 | 3.4608 | 0.0681 | 3.47 | 3.46 |  0.90
  Group A                 | 0.5333 | 1.6647 | 6.0000 | 0.1667 | 4.40 | 4.08 |  1.90
  Group C                 | 0.5901 | 1.4877 | 6.0000 | 0.0644 | 4.72 | 4.15 |  6.20
  Chebyshev 0.05°         | 0.5358 | 1.1343 | 4.7902 | 0.2088 | 4.09 | 3.52 |  3.56
  Gaussian to −12 dB      | 0.4904 | 1.0888 | 6.4357 | 0.1554 | 3.71 | 3.43 |  0.47
  Double 2nd-order Bessel | 0.5000 | 0.3333 | 2.0000 | 0.0833 | 2.92 | 2.96 |  0.44
Thus we have concluded the section on four-pole L+T peaking networks. Here we have discussed the geometrical synthesis in a very elementary way, which can be briefly explained as follows:

If the main capacitance C loading the T-coil network tap is known and the loading resistor R is selected upon the required gain, we can, based on the pole data and the geometrical relations of their real and imaginary parts, calculate all the remaining circuit parameters for the complete L+T network.

As we shall see later in the book, the same procedure can be used to calculate the circuit parameters for a multi-stage amplifier by implementing the peaking networks described so far.
[Figure 2.6.7 shows the step responses vs. t/[R(C + C_i)] for the same four cases and parameters as in Fig. 2.6.3, with the non-peaking case L = L_i = 0 as reference.]
Fig. 2.6.7: Four-pole L+T circuit step response: a) MFA; b) MFED; c) Group C; d) Group A.
[Figure 2.6.8 shows the step responses vs. t/[R(C + C_i)] for the cases e), f), g) with the parameters of Fig. 2.6.4.]
Fig. 2.6.8: Some additional four-pole L+T circuit step responses: e) Chebyshev 0.05°; f) Gaussian to −12 dB; g) double 2nd-order Bessel. Again, the step response confirms that repeating the poles is not optimal; compare the rise times of g) and b) in Fig. 2.6.7.
[Figure 2.7.1 shows the shunt peaking network: the current source i_i drives the capacitance C in parallel with the series branch of R and L; i_L is the inductor current and v_o the output voltage.]
Fig. 2.7.1: A shunt peaking network. It has two poles and one zero.
The output voltage is V_o = I_i Z, where:

Z = \frac{(R + j\omega L)\,\frac{1}{j\omega C}}{R + j\omega L + \frac{1}{j\omega C}} = \frac{R + j\omega L}{1 + j\omega R C - \omega^2 L C}    (2.7.1)

Here, too, we introduce the circuit normalizations:

L = m R^2 C  and  \omega_h = \frac{1}{RC}    (2.7.2)
With these the impedance becomes:

Z(\omega) = R\,\frac{1 + j m\,\frac{\omega}{\omega_h}}{1 - m\left(\frac{\omega}{\omega_h}\right)^2 + j\,\frac{\omega}{\omega_h}}    (2.7.3)

and the frequency response magnitude is:

|F(\omega)| = \sqrt{\frac{1 + m^2\left(\frac{\omega}{\omega_h}\right)^2}{1 + (1 - 2m)\left(\frac{\omega}{\omega_h}\right)^2 + m^2\left(\frac{\omega}{\omega_h}\right)^4}}    (2.7.4)
We shall first find the value of the parameter m for the MFA response. In this case the factors at (ω/ω_h)² in the numerator and in the denominator must be equal [Ref. 2.4]:

m^2 = 1 - 2m  \;\Rightarrow\;  m = \sqrt{2} - 1 = 0.4142    (2.7.5)

With this value m² = 1 − 2m = 0.1716, and the MFA frequency response is:

|F(\omega)| = \sqrt{\frac{1 + 0.1716\left(\frac{\omega}{\omega_h}\right)^2}{1 + 0.1716\left(\frac{\omega}{\omega_h}\right)^2 + 0.1716\left(\frac{\omega}{\omega_h}\right)^4}}    (2.7.6)
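A small sketch confirming Eq. 2.7.4 to 2.7.6: with m = √2 − 1 the response is unity at DC and falls to about 0.707 near ω = 1.72 ω_h, the bandwidth improvement listed in Table 2.7.1:

```python
import math

def mag_shunt(w, m):
    """Eq. 2.7.4: shunt peaking magnitude; w normalized to wh = 1/RC."""
    num = 1 + m**2 * w**2
    den = 1 + (1 - 2 * m) * w**2 + m**2 * w**4
    return math.sqrt(num / den)

m_mfa = math.sqrt(2) - 1          # Eq. 2.7.5
print(mag_shunt(1.72, m_mfa))     # ≈ 0.707
```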
The phase response is:

\varphi(\omega) = \arctan\frac{\Im\{F(\omega)\}}{\Re\{F(\omega)\}}

where F(ω) can be derived from Eq. 2.7.3 by making the denominator real. This is done by multiplying both the numerator and the denominator by the complex conjugate value of the denominator: 1 − m(ω/ω_h)² − j(ω/ω_h).
F(\omega) = \frac{\left(1 + j m\,\frac{\omega}{\omega_h}\right)\left(1 - m\,\frac{\omega^2}{\omega_h^2} - j\,\frac{\omega}{\omega_h}\right)}{S},  where  S = \left(1 - m\,\frac{\omega^2}{\omega_h^2}\right)^2 + \left(\frac{\omega}{\omega_h}\right)^2    (2.7.7)

Next we multiply the brackets in the numerator and separate the real and imaginary parts:

F(\omega) = \frac{1 + j\left[(m - 1)\,\frac{\omega}{\omega_h} - m^2\,\frac{\omega^3}{\omega_h^3}\right]}{S}    (2.7.8)

By dividing the imaginary part of F(ω) by its real part, S cancels out from the phase:

\varphi = \arctan\left[(m - 1)\,\frac{\omega}{\omega_h} - m^2\left(\frac{\omega}{\omega_h}\right)^3\right]    (2.7.9)
By inserting m = 0.4142 (Eq. 2.7.5), we would get the phase response of the MFA case, as plotted in Fig. 2.7.3, curve a. But for the MFED response, the correct value of m must be found from the envelope delay, so we must calculate dφ/dω from Eq. 2.7.9:

\tau_e = -\frac{d\varphi}{d\omega} = -\frac{1}{\omega_h}\cdot\frac{(m-1) - 3 m^2 \left(\frac{\omega}{\omega_h}\right)^2}{1 + \left[(m-1)\,\frac{\omega}{\omega_h} - m^2\left(\frac{\omega}{\omega_h}\right)^3\right]^2}    (2.7.10)

Let us square the bracket in the denominator and multiply both sides of the equation by ω_h in order to obtain the normalized envelope delay:

\tau_e\,\omega_h = -\frac{(m-1) - 3 m^2 \left(\frac{\omega}{\omega_h}\right)^2}{1 + (m-1)^2\left(\frac{\omega}{\omega_h}\right)^2 - 2 m^2 (m-1)\left(\frac{\omega}{\omega_h}\right)^4 + m^4\left(\frac{\omega}{\omega_h}\right)^6}    (2.7.11)

For a maximally flat envelope delay the relative change of the numerator and the denominator with (ω/ω_h)² must be equal, which gives the condition:

3 m^2 = (m^2 - 2m + 1)(1 - m)    (2.7.12)

Finally:

m^3 + 3m - 1 = 0  \;\Rightarrow\;  m = 0.3222    (2.7.13)
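The cubic of Eq. 2.7.13 has no nice closed form, but since m³ + 3m − 1 is monotonic, its single real root is found by a few bisection steps; a sketch:

```python
# Solve m^3 + 3m - 1 = 0 (Eq. 2.7.13) by bisection on [0, 1].
def f(m):
    return m**3 + 3 * m - 1

lo, hi = 0.0, 1.0      # f(0) = -1 < 0, f(1) = 3 > 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)   # ≈ 0.3222
```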
The only real solution of this equation is m = 0.3222. If we put it into Eq. 2.7.4 for the frequency response, Eq. 2.7.9 for the phase response and Eq. 2.7.11 for the envelope delay, we can make the plots b in Fig. 2.7.2, 2.7.3, and 2.7.4.

Now we have enough data to calculate both poles and the zero for the MFA and MFED case, which we shall also need to calculate the step response. But we still have to find the value of m for the critical damping (CD) case. We can derive it from the fact that for CD all the poles are real and equal. To find the poles, we take Eq. 2.7.3, divide it by R, and replace the normalized frequency jω/ω_h with the complex variable s:

\frac{Z(s)}{R} = F(s) = \frac{1 + m s}{1 + s + m s^2}    (2.7.14)

The roots of the denominator are the poles:

s_{1,2} = -\frac{1}{2m} \pm \frac{1}{2m}\sqrt{1 - 4m} = \frac{-1 \pm j\sqrt{4m - 1}}{2m}    (2.7.15)

and the root of the numerator is the zero:

s_3 = -\frac{1}{m}    (2.7.16)

Since the poles are usually complex, we have written the complex form in the solution of the quadratic equation (Eq. 2.7.15). However, for CD, the solution must be real, so the expression under the square root must be zero and this gives m = 0.25. The curves corresponding to CD in Fig. 2.7.2, 2.7.3, and 2.7.4 are marked with the letter c.

Note that, in spite of the higher cut-off frequency, all the curves have the same high frequency asymptote as the first-order response.
[Figure 2.7.2 shows |V_o/(I_i R)| vs. ω/ω_h for the shunt peaking circuit with L = mR²C, ω_h = 1/RC: a) m = 0.41; b) m = 0.32; c) m = 0.25; non-peaking case L = 0 as reference.]
Fig. 2.7.2: Shunt peaking circuit frequency response: a) MFA; b) MFED; c) CD. As usual, the non-peaking case (L = 0) is the reference. The system zero causes the high-frequency asymptote to be the same as for the non-peaking system.
[Figure 2.7.3 shows the phase (0° to −90°) vs. ω/ω_h for the same three cases and parameters as in Fig. 2.7.2.]
Fig. 2.7.3: Shunt peaking circuit phase response: a) MFA; b) MFED; c) CD. The non-peaking case (L = 0) is the reference. The system zero causes the high frequency phase to be −90°, the same as for the non-peaking system.
[Figure 2.7.4 shows the normalized envelope delay τ_e ω_h vs. ω/ω_h for the same three cases and parameters as in Fig. 2.7.2.]
Fig. 2.7.4: Shunt peaking circuit envelope delay: a) MFA; b) MFED; c) CD. The non-peaking case (L = 0) is the reference. The peaking systems have a higher bandwidth, and consequently a lower delay at DC, than the non-peaking system.
c) for the CD response:  the double pole s_{1n,2n} = -\sigma_{1n} = -2

With these data we can calculate the step response. At first we calculate the MFA and MFED responses, where in both cases we have two complex conjugate poles and one real zero. The general expression for the frequency response is:

F(s) = -\frac{s_1 s_2\,(s - s_3)}{s_3\,(s - s_1)(s - s_2)}    (2.7.17)

We multiply this equation by the unit step operator 1/s and obtain a new function:

G(s) = -\frac{s_1 s_2\,(s - s_3)}{s_3\, s\,(s - s_1)(s - s_2)}    (2.7.18)

To calculate the step response in the time domain we take the ℒ⁻¹ transform:

g(t) = \mathcal{L}^{-1}\{G(s)\} = \sum \mathrm{res}\,\frac{-s_1 s_2\,(s - s_3)\,e^{st}}{s_3\, s\,(s - s_1)(s - s_2)}    (2.7.19)
The residue at s = 0 is equal to 1, whilst at the two poles:

\mathrm{res}_1 = \lim_{s \to s_1}\,(s - s_1)\,G(s)\,e^{st} = -\frac{s_2\,(s_1 - s_3)}{s_3\,(s_1 - s_2)}\,e^{s_1 t}

\mathrm{res}_2 = \lim_{s \to s_2}\,(s - s_2)\,G(s)\,e^{st} = -\frac{s_1\,(s_2 - s_3)}{s_3\,(s_2 - s_1)}\,e^{s_2 t}    (2.7.20)
Since the procedure is the same as for the previous circuits, we shall omit some intermediate expressions. After inserting all the pole components, the sum of residues is:

g(t) = 1 - \frac{A + jB}{2jB}\,e^{(-\sigma_1 + j\omega_1)t} + \frac{A - jB}{2jB}\,e^{(-\sigma_1 - j\omega_1)t}    (2.7.21)

where A = ω₁² + σ₁(σ₁ − σ₃) and B = ω₁σ₃. After factoring out e^{−σ₁t}/B we obtain:

g(t) = 1 - \frac{e^{-\sigma_1 t}}{B}\left[(A + jB)\,\frac{e^{j\omega_1 t}}{2j} - (A - jB)\,\frac{e^{-j\omega_1 t}}{2j}\right]    (2.7.22)

The expression in parentheses can be simplified by sorting the real and imaginary parts:

A\,\frac{e^{j\omega_1 t} - e^{-j\omega_1 t}}{2j} + B\,\frac{e^{j\omega_1 t} + e^{-j\omega_1 t}}{2} = A\sin\omega_1 t + B\cos\omega_1 t = \sqrt{A^2 + B^2}\,\sin(\omega_1 t + \psi)

where:

\psi = \arctan\frac{B}{A}    (2.7.23)

Again we have written ψ in order not to confuse it with the pole angle θ. And here, too, we will have to add π radians to ψ wherever appropriate, owing to the π period of the arctangent function. By entering Eq. 2.7.23 into Eq. 2.7.22 and inserting the poles, we obtain the general expression:

g(t) = 1 + \frac{\sqrt{A^2 + B^2}}{B}\,e^{-\sigma_1 t}\,\sin(\omega_1 t + \psi)    (2.7.24)

where:

\psi = \arctan\frac{\omega_1 \sigma_3}{\omega_1^2 + \sigma_1(\sigma_1 - \sigma_3)} + \pi    (2.7.25)
=#" 5" 5" 5$
(2.7.25)
We now need the general expression for the step response for the CD case, where
we have a double real pole =" and a real zero =$ . We start from the normalized frequency
response function:
=#" a= =$ b
J a=b
(2.7.26)
=$ a= =" b#
which must be multiplied by the unit step operator "=, thus obtaining:
Ka=b
=#" a= =$ b
= =$ a= =" b#
(2.7.27)
=#" a= =$ b e=>
= =$ a= =" b#
="# =$
=# = "
" $
(2.7.28)
=#"
=>
.
a= = $ b e
="
=" >
a= =" b#
res" lim
=" > " = " e
= p=" .=
= =$ a= =" b#
$
If we express the poles in the second residue with their real and imaginary parts and
take the sum of both residues, we obtain:
ga>b " e 5" > 5" > "
-2.79-
5"
"
5$
(2.7.29)
P. Stari, E. Margan
Finally we insert the normalized numerical values for the poles, the zero and the time variable t/T = t/RC. The step response is:

a) for MFA (σ_{1n} = 1.2071, ω_{1n} = 0.8105, σ_{3n} = 2.4142):

g(t) = 1 + 1.0804\, e^{-1.2071\, t/T} \sin(0.8105\, t/T + 1.1826 + \pi)

b) for MFED (σ_{1n} = 1.5518, ω_{1n} = 0.5374, σ_{3n} = 3.1037):

g(t) = 1 + 1.6170\, e^{-1.5518\, t/T} \sin(0.5374\, t/T + 0.6668 + \pi)
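Because the shunt peaking system has one zero, the pole-zero excess is only one and the step response starts with a nonzero slope, equal to (σ₁² + ω₁²)/σ₃ ≈ 0.876 for the MFA case. A sketch checking both properties:

```python
import math

def g_mfa_shunt(t):
    """MFA shunt peaking step response a); t in units of T = RC."""
    return 1 + 1.0804 * math.exp(-1.2071 * t) * math.sin(0.8105 * t + 1.1826 + math.pi)

slope = (g_mfa_shunt(1e-6) - g_mfa_shunt(0.0)) / 1e-6
print(g_mfa_shunt(0.0), slope)   # ≈ 0 and ≈ 0.876 (the zero makes the initial slope nonzero)
```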
[Figure 2.7.5 shows the step responses vs. t/RC: a) m = 0.41; b) m = 0.32; c) m = 0.25; L = mR²C, ω_h = 1/RC, non-peaking case L = 0 as reference.]
Fig. 2.7.5: Shunt peaking circuit step response: a) MFA; b) MFED; c) CD. The non-peaking case (L = 0) is the reference. The difference between the number of poles and the number of zeros is only 1 for the shunt peaking systems, therefore the starting slope of the step response is similar to that of the non-peaking first-order system.
Fig. 2.7.6 shows the pole placements for the three cases. Note the placement of the
zero, which is farther from the origin for those systems which have the poles with lower
imaginary part.
[Figure 2.7.6 shows the placement of the poles s_{1,2} = −(1 ∓ j√(4m−1))/(2mRC) and the zero s₃ = −1/(mRC) in the complex plane: a) m = 0.4142, s₃ = −2.41/RC; b) m = 0.3222, s₃ = −3.104/RC; c) m = 0.2500, double pole at −2/RC and s₃ = −4/RC.]
Fig. 2.7.6: Shunt peaking circuit placement of the poles and the zero: a) MFA; b) MFED; c) CD. Note the position of the zero s₃ at the far left of the real axis. Although far from the poles, its influence on the response is notable in each case.
We conclude the discussion with Table 2.7.1, in which all the important two-pole shunt peaking circuit parameters are listed.

Table 2.7.1

  response type |   m    | η_b  | η_r  | δ [%]
  --------------+--------+------+------+------
  a) MFA        | 0.4141 | 1.72 | 1.80 | 3.08
  b) MFED       | 0.3222 | 1.58 | 1.57 | 0.41
  c) CD         | 0.2500 | 1.44 | 1.41 | 0.00
[Figure 2.8.1 shows the shunt peaking circuit of Fig. 2.7.1 with an additional capacitance C_L in parallel with the coil L.]
Fig. 2.8.1: The shunt peaking circuit with the coil loading capacitance C_L has three poles and two zeros.
"
4=G
V
"
"
"
4=P
4 = GP
V 4 = P =# PGP V
4 = G Va" =# PGP b =# P aG GP b "
(2.8.1)
P 7V # G
=h
"
VG
GP 8 G
(2.8.2)
^ a=b V
=
=
4 7
=h
=h
" 78
#
=
=
=
" 7 a" 8b
4
" 78
=h
=h
=h
(2.8.3)
(2.8.4)
Mi V
V
-2.83-
P. Stari, E. Margan
"
"
8
7
8
J a=b
"8
"
"
=$ =#
=
8
78
78
=# =
(2.8.5)
The magnitude |F(ω)| = √(F(ω)·F*(ω)) can be obtained more easily from the impedance magnitude. We start from Eq. 2.8.3, square the imaginary and real parts in the numerator and in the denominator and take the square root of the whole fraction:

|Z(\omega)| = R\,\sqrt{\frac{\left(1 - m n\,q^2\right)^2 + m^2 q^2}{\left[1 - m(1+n)\,q^2\right]^2 + q^2\left(1 - m n\,q^2\right)^2}}    (2.8.6, 2.8.7)

and here we have replaced the normalized frequency ω/ω_h with the symbol q in order to be able to write the equation in a single line.

For the MFA response the numerator and denominator factors at the same powers of ω/ω_h in Eq. 2.8.7 must be equal [Ref. 2.4]. Thus we have two equations:

m^2 - 2 m n = 1 - 2 m (1 + n)
m^2 n^2 = m^2 (1 + n)^2 - 2 m n    (2.8.8)

from which we calculate the values of m and n for the MFA response:

m = 0.414  and  n = 0.354    (2.8.9)
For the MFED response the procedure for calculating the parameters m and n could be similar to that for the two-pole shunt peaking circuit: we would first calculate the formula for the envelope delay and equate the factors at the same powers of ω/ω_h in the numerator and the denominator, etc. But, with the increasing number of poles, the calculation becomes more complicated. It is much simpler to compare the coefficients of the characteristic polynomial of the complex frequency transfer function, Eq. 2.8.5.
The numerical values of the coefficients of the 3rd-order Bessel polynomial, sorted by the falling powers of s, are: 1, 6, 15 and 15 again. Thus, we have two equations:

\frac{1+n}{n} = 6  and  \frac{1}{mn} = 15    (2.8.10)

from which we get:

n = \frac{1}{5}  and  m = \frac{1}{3}    (2.8.11)
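The coefficient matching of Eq. 2.8.10 and 2.8.11 is exact rational arithmetic, which `fractions` makes explicit; a sketch:

```python
from fractions import Fraction

# Eq. 2.8.10: match the canonical denominator of Eq. 2.8.5,
#   s^3 + ((1+n)/n) s^2 + s/(mn) + 1/(mn),
# against the 3rd-order Bessel polynomial s^3 + 6 s^2 + 15 s + 15.
n = Fraction(1, 5)        # from (1 + n)/n = 6
m = 1 / (15 * n)          # from 1/(m n) = 15
print(n, m)               # 1/5 and 1/3
```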
Compare these values to those from the work of V. L. Krejcer [Ref. 2.4, loc. cit.]. His values for the MFED response are:

m = 0.35  and  n = 0.22

Krejcer also calculated the parameters for a "special" case circuit (SPEC):

m = 0.45  and  n = 0.22    (2.8.12)
By inserting the values of the parameters from Eq. 2.8.9 to 2.8.12 into Eq. 2.8.7, we can calculate the corresponding frequency responses. However, for the phase, envelope delay, and step response we also need to know the values of poles and zeros. Since we know all the values of the parameters m and n, we can use Eq. 2.8.5. We equate the denominator of F(s) to zero and find the roots, which are the three poles of F(s). Similarly, by equating the numerator of F(s) to zero we calculate the two zeros (for readers less experienced in mathematics we have reported the general solutions for polynomials of first, second and third order in Appendix 2.1).

a) MFA response (m = 0.414 and n = 0.354):

The poles:  s_{1n,2n} = -0.850 ± j\,1.577,  s_{3n} = -2.125
The zeros:  s^2 + 2.825\,s + 6.823 = 0  \Rightarrow  s_{4n,5n} = -\sigma_{5n} \pm j\omega_{5n} = -1.412 \pm j\,2.197

b) MFED response (m = 1/3 and n = 1/5):

The poles:  s_{1n,2n} = -1.839 ± j\,1.754,  s_{3n} = -2.322
The zeros:  s^2 + 5\,s + 15 = 0  \Rightarrow  s_{4n,5n} = -2.500 \pm j\,2.958

c) The special case (m = 0.45 and n = 0.22):

The poles:  s_{1n,2n} = -1.035 ± j\,1.355,  s_{3n} = -3.475
The zeros:  s^2 + 4.545\,s + 10.101 = 0  \Rightarrow  s_{4n,5n} = -2.273 \pm j\,2.222

By inserting the values of m and n in Eq. 2.8.7 we can calculate the frequency response magnitude of the three cases. The resulting plots are shown in Fig. 2.8.2. Note the high frequency asymptote, which is the same as for the non-peaking single-pole case.
[Figure 2.8.2 shows |V_o/(I_i R)| vs. ω/ω_h for the three-pole shunt peaking circuit with L = mR²C, ω_h = 1/RC: a) m = 0.414, C_L/C = 0.354; b) m = 0.333, C_L/C = 0.200; c) m = 0.450, C_L/C = 0.220; non-peaking case L = 0 as reference.]
Fig. 2.8.2: Three-pole shunt peaking circuit frequency response: a) MFA; b) MFED; c) SPEC. The non-peaking case (L = 0) is the reference. The difference of the number of poles and the number of zeros is only 1 for peaking systems, therefore the ending slope of the frequency response is similar to that of the non-peaking system.
The phase response is again the sum of the contributions of the poles, minus those of the zeros:

\varphi = -\left[\arctan\frac{\frac{\omega}{\omega_h} + \omega_{1n}}{\sigma_{1n}} + \arctan\frac{\frac{\omega}{\omega_h} - \omega_{1n}}{\sigma_{1n}} + \arctan\frac{\omega/\omega_h}{\sigma_{3n}}\right] + \arctan\frac{\frac{\omega}{\omega_h} + \omega_{5n}}{\sigma_{5n}} + \arctan\frac{\frac{\omega}{\omega_h} - \omega_{5n}}{\sigma_{5n}}    (2.8.13)

By entering the numerical values of poles and zeros we obtain the phase response equations for each case. In Fig. 2.8.3 the corresponding plots are shown.
By entering the numerical values of poles and zeros we obtain the phase response
equations for each case. In Fig. 2.8.3 the corresponding plots are shown.
2.8.3 Envelope Delay
We use Eq. 2.2.34, adding a term for each pole and subtracting one for each zero:

\tau_e\,\omega_h = -\left[\frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\frac{\omega}{\omega_h} - \omega_{1n}\right)^2} + \frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\frac{\omega}{\omega_h} + \omega_{1n}\right)^2} + \frac{\sigma_{3n}}{\sigma_{3n}^2 + \left(\frac{\omega}{\omega_h}\right)^2}\right] + \frac{\sigma_{5n}}{\sigma_{5n}^2 + \left(\frac{\omega}{\omega_h} - \omega_{5n}\right)^2} + \frac{\sigma_{5n}}{\sigma_{5n}^2 + \left(\frac{\omega}{\omega_h} + \omega_{5n}\right)^2}    (2.8.14)
Again we insert the numerical values for poles and zeros in Eq. 2.8.14 to plot the
envelope delay as shown in Fig. 2.8.4. As we have explained in Fig. 2.2.6, there is an
envelope advance (owed to system zeros) in the high frequency range.
[Figure 2.8.3 shows the phase (0° to −90°) vs. ω/ω_h for the same three cases and parameters as in Fig. 2.8.2.]
Fig. 2.8.3: Three-pole shunt peaking circuit phase response: a) MFA; b) MFED; c) SPEC. The non-peaking case (L = 0) is the reference. The system zeros cause the phase response to bounce up from the −90° boundary and then return back.
[Figure 2.8.4 shows the normalized envelope delay τ_e ω_h vs. ω/ω_h for the same three cases and parameters as in Fig. 2.8.2.]
Fig. 2.8.4: Three-pole shunt peaking circuit envelope delay: a) MFA; b) MFED; c) SPEC.
The non-peaking (L = 0) case is shown as the reference. Note the envelope advance in the
high frequency range, owing to the system zeros.
F(s) = -[s1 s2 s3 (s - s4)(s - s5)] / [s4 s5 (s - s1)(s - s2)(s - s3)]            (2.8.15)
We multiply this function by the unit step operator 1/s and obtain a new equation:
G(s) = -[s1 s2 s3 (s - s4)(s - s5)] / [s s4 s5 (s - s1)(s - s2)(s - s3)]            (2.8.16)
The step response g(t) is fully derived in Appendix 2.3. The result is:
g(t) = 1 + K1 e^(-σ1 t/T) sin(ω1 t/T + θ1) + K3 e^(-σ3 t/T)            (2.8.17)
where K1, θ1 and K3 are combinations of the poles and zeros derived in Appendix 2.3;
among them appears the factor (σ5² + ω5²)[(σ1 - σ3)² + ω1²]            (2.8.18)
(2.8.18)
By inserting the numerical values of the poles and zeros we obtain the following relations (with T = RC):
a) MFA response ( m = 0.414 and n = 0.354 ):
g(t) = 1 + 0.5573 e^(-0.850 t/T) sin(1.577 t/T + 0.7741 + π) - 0.6104 e^(-2.125 t/T)
b) MFED response ( m = 0.333 and n = 0.200 ):
g(t) = 1 - 0.8054 e^(-1.839 t/T) sin(1.754 t/T + 0.1772 + π) - 1.1420 e^(-2.322 t/T)
c) Special case ( m = 0.450 and n = 0.220 ):
g(t) = 1 + 0.8814 e^(-1.035 t/T) sin(1.355 t/T + 1.0333 + π) - 0.2429 e^(-3.475 t/T)
The plots of these responses can be seen in Fig. 2.8.5. Because the difference
between the number of poles and zeros is only one, the initial slope of the response is the
same as for the non-peaking response.
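As a quick numerical cross-check of these three relations (a Python sketch; the coefficients are the decoded values above, with time normalized to T = RC), each response must start at g(0) = 0 and settle to the final value 1:

```python
import math

def step_mfa(t):
    # a) MFA: m = 0.414, n = 0.354
    return (1 + 0.5573 * math.exp(-0.850 * t) * math.sin(1.577 * t + 0.7741 + math.pi)
              - 0.6104 * math.exp(-2.125 * t))

def step_mfed(t):
    # b) MFED: m = 0.333, n = 0.200
    return (1 - 0.8054 * math.exp(-1.839 * t) * math.sin(1.754 * t + 0.1772 + math.pi)
              - 1.1420 * math.exp(-2.322 * t))

def step_spec(t):
    # c) SPEC: m = 0.450, n = 0.220
    return (1 + 0.8814 * math.exp(-1.035 * t) * math.sin(1.355 * t + 1.0333 + math.pi)
              - 0.2429 * math.exp(-3.475 * t))

# each response starts at g(0) = 0 and settles to 1:
for g in (step_mfa, step_mfed, step_spec):
    assert abs(g(0.0)) < 1e-3
    assert abs(g(20.0) - 1.0) < 1e-6
```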
[Plot: step responses vo/(ii R) versus t/RC; same circuit and parameter values as in Fig. 2.8.2.]
Fig. 2.8.5: Three-pole shunt peaking circuit step response: a) MFA; b) MFED; c) SPEC. The
non-peaking case (L = 0) is the reference. The initial slope is similar to that of the non-peaking
response, since the difference between the number of system poles and zeros is only one.
We conclude the discussion of the three-pole and two-zero shunt peaking circuit with
Table 2.8.1, which gives all the important circuit parameters.
Table 2.8.1
response type      a) MFA     b) MFED    c) SPEC
m                  0.414      0.333      0.450
n                  0.354      0.200      0.220
ηb                 1.84       1.73       1.85
ηr                 1.87       1.75       1.93
δ [%]              7.1        0.37       7.0
[Circuit diagram: the shunt-series peaking circuit, with input current ii, branch currents i1 and i2, input capacitance Ci and coil L1.]
Although the improvement of the bandwidth and rise time in a shunt-series peaking
circuit exceeds that of a pure series or pure shunt peaking circuit, the improvement factors
just barely reach the values offered by the three-pole T-coil circuit, which is analytically
and practically much easier to deal with; not to speak of the improvement offered by the
L+T network, which is substantially greater. This circuit has been extensively treated in
the literature [Ref. 2.4, 2.25, 2.26]. The calculation of the step response for this circuit can be
found in Appendix 2.3, so we shall give only the essential relations.
We start the analysis by calculating the input impedance:
1/Zi = Ii/Vi = sCi + 1/(R + sL1) + 1/Z2,   with   Ii = I1 + I2 + I3            (2.9.1)
where:
Z2 = sL2 + 1/(sC)            (2.9.2)
By introducing this into Eq. 2.9.1 and eliminating the double fractions we get:
Zi = (R + sL1)(1 + s²L2C) / [sCi(R + sL1)(1 + s²L2C) + s²L2C + 1 + sC(R + sL1)]            (2.9.3)
The output voltage is developed across C, so:
Vo = Ii Zi / (s²L2C + 1)            (2.9.4)
(2.9.4)
We insert Eq. 2.9.3 for Zi, cancel the (s²L2C + 1) terms and extract R from the numerator:
Vo = Ii R (1 + sL1/R) / [sCi(R + sL1)(1 + s²L2C) + s²L2C + 1 + sC(R + sL1)]            (2.9.5)
Next we expand the denominator and divide by the coefficient at the highest power of s, first
in the numerator (because it is easy) and then in the denominator:
Vo/Ii = (L1/R)(s + R/L1) /
        [s⁴ L1L2CCi + s³ L2CCiR + s² (CiL1 + L2C + L1C) + s R(C + Ci) + 1]            (2.9.6)
so that:
Vo/Ii = (L1/R)(s + R/L1) / { L1L2CCi [ s⁴ + s³ R/L1 + s² (L2C + L1Ci + L1C)/(L1L2CCi)
        + s R(Ci + C)/(L1L2CCi) + 1/(L1L2CCi) ] }
Since we would like to know how much we can improve the bandwidth with
respect to the non-peaking circuit (inductances shorted), let us normalize the transfer
function to ωh = 1/[R(Ci + C)] by putting R = 1 and Ci + C = 1. To simplify the
expressions, we introduce the following parameters:
n = C/(C + Ci)      m1 = L1/[R²(C + Ci)]      m2 = L2/[R²(C + Ci)]            (2.9.7)
so that, in normalized form:
C = n      Ci = 1 - n      L1 = m1      L2 = m2            (2.9.8)
With these the transfer function becomes:
F(s) = (1/[m2 n(1 - n)]) (s + 1/m1) /
       [ s⁴ + s³/m1 + s² (m2n + m1)/(m1m2 n(1 - n)) + s/(m1m2 n(1 - n)) + 1/(m1m2 n(1 - n)) ]            (2.9.9)
Now we compare this with the generalized four-pole one-zero transfer function:
F(s) = -(s1s2s3s4/s5) (s - s5) / [(s - s1)(s - s2)(s - s3)(s - s4)]            (2.9.10)
The zero, from Eq. 2.9.9, is:
s5 = -1/m1            (2.9.11)
and the denominator, multiplied out, is written as:
s⁴ + a s³ + b s² + c s + d            (2.9.12)
so that:
F(s) = m1 d (s - s5) / (s⁴ + a s³ + b s² + c s + d)            (2.9.13)
where:
a = -(s1 + s2 + s3 + s4)
b = s1s2 + s1s3 + s1s4 + s2s3 + s2s4 + s3s4
c = -(s1s2s3 + s1s2s4 + s1s3s4 + s2s3s4)
d = s1s2s3s4            (2.9.14)
By comparing the coefficients at equal powers of s, we note that:
a = 1/m1      b = (m2n + m1)/[m1m2 n(1 - n)]      c = d = 1/[m1m2 n(1 - n)]            (2.9.15)
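These relations are easy to check numerically. A Python sketch, using the MFED values m1 = 0.1, m2 = 0.4627 and n = 0.7101 that are derived in Eqs. 2.9.16-2.9.21: the coefficients must reproduce the 4th-order Bessel polynomial s⁴ + 10s³ + 45s² + 105s + 105:

```python
# Check of Eq. 2.9.15 with the MFED design values (m1, m2, n from the text):
m1, m2, n = 0.1, 0.4627, 0.7101

d = 1.0 / (m1 * m2 * n * (1 - n))
a = 1.0 / m1
b = (m2 * n + m1) * d
c = d                              # the s-coefficient equals the constant term

# These should match the 4th-order Bessel polynomial coefficients 10, 45, 105:
assert abs(a - 10.0) < 1e-9
assert abs(b - 45.0) < 0.05        # small residue from the 4-digit rounding
assert abs(c - 105.0) < 0.1
```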
For the MFED response the coefficients of the fourth-order Bessel polynomial (which we
obtain by running the BESTAP routine in Part 6) have the following numerical values:
a = 10      b = 45      c = 105      d = 105            (2.9.16)
So, from a:
m1 = 1/a = 0.1
From b and c:
m2 = 1/(c m1 n(1 - n)) = 1/[10.5 (n - n²)]            (2.9.17)
whilst, since c = d:
m2 n + m1 = b/c            (2.9.18)
By inserting Eq. 2.9.17 into Eq. 2.9.18 we obtain:
1/[10.5 (1 - n)] + 0.1 = 45/105            (2.9.19)
so that:
1 - n = 1/[10.5 (45/105 - 0.1)] = 1/3.45            (2.9.20)
And, since n > 0:
n = 1 - 1/3.45 = 0.7101
Finally, from Eq. 2.9.17:
m2 = 1/[10.5 (n - n²)] = (45/105 - 0.1)/0.7101 = 0.4627            (2.9.21)
The component values for the MFED transfer function will be:
C = n(C + Ci) = 0.7101 (C + Ci)
Ci = (1 - n)(C + Ci) = 0.2899 (C + Ci)
L1 = m1 R²(C + Ci) = 0.1 R²(C + Ci)
L2 = m2 R²(C + Ci) = 0.4627 R²(C + Ci)            (2.9.22)
The MFED poles s1...s4 (BESTAP routine, Part 6) and the zero s5 (Eq. 2.9.11) are:
s1n,2n = -2.8962 ± j0.8672
s3n,4n = -2.1038 ± j2.6574
s5n = -10.000            (2.9.23)
For the MFA we can use the same procedure as in Sec. 2.3.1, but since we have a
system of 4th-order we would get an 8th-order polynomial and, consequently, a complicated
set of equations to solve. Instead we shall use a simpler approach (which, by the way, can
be used in any other case). We must first consider that our system will have a bandwidth
larger than the normalized Butterworth system. Let ηb be the proportionality factor between
each normalized Butterworth pole and the shunt-series peaking system pole:
s_k = ηb s_kt            (2.9.24)
The normalized 4th-order Butterworth system poles (see Part 6, BUTTAP routine) are:
s1t,2t = -0.3827 ± j0.9239
s3t,4t = -0.9239 ± j0.3827            (2.9.25)
and the corresponding polynomial coefficients are:
2.6131      3.4142      2.6131      1.0000            (2.9.26)
The polynomial coefficients a, b, c and d of the shunt-series peaking system are then:
a = -ηb (s1t + s2t + s3t + s4t) = ηb · 2.6131
b = ηb² (s1ts2t + s1ts3t + s1ts4t + s2ts3t + s2ts4t + s3ts4t) = ηb² · 3.4142
c = -ηb³ (s1ts2ts3t + s1ts2ts4t + s1ts3ts4t + s2ts3ts4t) = ηb³ · 2.6131
d = ηb⁴ s1ts2ts3ts4t = ηb⁴            (2.9.27)
By equating these with the coefficients of Eq. 2.9.15:
ηb · 2.6131 = 1/m1
ηb² · 3.4142 = (m2n + m1)/[m1m2 n(1 - n)]
ηb³ · 2.6131 = 1/[m1m2 n(1 - n)]
ηb⁴ = 1/[m1m2 n(1 - n)]            (2.9.28)
Since the right hand sides of the last two equations are equal, it follows that:
ηb = 2.6131            (2.9.29)
Effectively, the pole multiplication factor is equal to the MFA bandwidth extension. From
the first equation of 2.9.28 we can now calculate m1:
m1 = 1/(2.6131 ηb) = 1/ηb² = 0.1464            (2.9.30)
From the last equation of 2.9.28 we can establish the relationship between m2 and n:
ηb⁴ = 1/[m1m2 n(1 - n)]  ⟹  m2 = 1/[ηb² n(1 - n)]            (2.9.31)
We insert this, together with m1 = 1/ηb², into the second equation of 2.9.28:
ηb² · 3.4142 = (m2n + m1)/[m1m2 n(1 - n)] = ηb⁴ (m2n + m1)            (2.9.32)
which reduces to 1/(1 - n) + 1 = 3.4142, so that:
n = 1 - 1/(3.4142 - 1) = 0.5858            (2.9.33)
and:
m2 = 1/[ηb² n(1 - n)] = 0.6036            (2.9.34)
The zero (Eq. 2.9.11) is:
s5n = -1/m1 = -6.8283            (2.9.35)
and the MFA polynomial coefficients are:
1.0000      6.8284      23.3132      46.6270      46.6270            (2.9.36)
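Again, the MFA design values can be cross-checked numerically (a Python sketch of Eqs. 2.9.29-2.9.36):

```python
# MFA design values from the normalized Butterworth coefficients (Eq. 2.9.26):
B = [1.0, 2.6131, 3.4142, 2.6131, 1.0]     # coefficients of s^4 ... s^0

eta_b = B[3] / B[4]                        # from c*eta^3 = d*eta^4, Eq. 2.9.29
m1 = 1.0 / eta_b**2                        # Eq. 2.9.30
n  = 1.0 - 1.0 / (B[2] - 1.0)              # Eq. 2.9.33
m2 = 1.0 / (eta_b**2 * n * (1.0 - n))      # Eq. 2.9.34

# Scaled polynomial coefficients, Eq. 2.9.36:
coeffs = [B[k] * eta_b**k for k in range(5)]

assert abs(eta_b - 2.6131) < 1e-12
assert abs(m1 - 0.1464) < 1e-4
assert abs(n - 0.5858) < 1e-4
assert abs(m2 - 0.6036) < 1e-3
assert abs(coeffs[1] - 6.8284) < 1e-3
assert abs(coeffs[4] - 46.6270) < 0.02
```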
Let us insert these data into Eq. 2.9.9 and calculate the poles and the zero.
a) MFA by PS/EM ( m1 = 0.1464 , m2 = 0.6036 , and n = 0.5858 ):
s⁴ + 6.8284 s³ + 23.3132 s² + 46.6270 s + 46.6270 = 0
The poles (ηb times the Butterworth poles of Eq. 2.9.25) and the zero are:
s1n,2n = -1.0000 ± j2.4142
s3n,4n = -2.4142 ± j1.0000
s5n = -6.8283
The frequency response magnitude is:
|F(jx)| = [√(σ5n² + x²)/σ5n] · (σ1n² + ω1n²)/√{[σ1n² + (x - ω1n)²][σ1n² + (x + ω1n)²]}
          · (σ3n² + ω3n²)/√{[σ3n² + (x - ω3n)²][σ3n² + (x + ω3n)²]}            (2.9.37)
where again x = ω/ωh. In Fig. 2.9.2 we have plotted the responses resulting from this
equation by inserting the values of the poles and the zero for our MFA and MFED.
[Plot: Vo/(Ii R) versus ω/ωh for the shunt-series peaking circuit (ωh = 1/[R(C + Ci)], n = C/(C + Ci), L1 = m1R²(C + Ci), L2 = m2R²(C + Ci), L1 = L2 = 0 for the reference); a) m1 = 0.146, m2 = 0.604, n = 0.586; b) m1 = 0.100, m2 = 0.463, n = 0.710.]
Fig. 2.9.2: The shunt-series peaking circuit frequency response: a) MFA; b) MFED. Note
that the MFA is not exactly maximally flat, owing to the system zero.
φ(ω) = -arctan[(ω/ωh - ω1n)/σ1n] - arctan[(ω/ωh + ω1n)/σ1n]
       - arctan[(ω/ωh - ω3n)/σ3n] - arctan[(ω/ωh + ω3n)/σ3n]
       + arctan[(ω/ωh)/σ5n]            (2.9.38)
By inserting the values for the poles and the zero from the equations above, we
obtain the responses shown in Fig. 2.9.3.
2.9.3 Envelope Delay
By Eq. 2.2.34, for responses a) and b) we obtain:
τe ωh = σ1n/[σ1n² + (ω/ωh - ω1n)²] + σ1n/[σ1n² + (ω/ωh + ω1n)²]
        + σ3n/[σ3n² + (ω/ωh - ω3n)²] + σ3n/[σ3n² + (ω/ωh + ω3n)²]
        - σ5n/[σ5n² + (ω/ωh)²]            (2.9.39)
By inserting the values for the poles and the zero from the equations above, we
obtain the responses shown in Fig. 2.9.4. Again, as in pure shunt peaking, we have different
low frequency delays for each type of poles, owing to the different normalization.
[Plot: phase, 0 to -270°, versus ω/ωh; same circuit and parameter values as in Fig. 2.9.2.]
Fig. 2.9.3: The shunt-series peaking circuit phase response: a) MFA; b) MFED.
[Plot: normalized envelope delay τe ωh versus ω/ωh; same circuit and parameter values as in Fig. 2.9.2.]
Fig. 2.9.4: The shunt-series peaking circuit envelope delay: a) MFA; b) MFED.
F(s) = -(s1s2s3s4/s5) (s - s5) / [(s - s1)(s - s2)(s - s3)(s - s4)]            (2.9.40)
To get the step response in the s domain, we multiply F(s) by the unit step operator 1/s:
G(s) = -(s1s2s3s4/s5) (s - s5) / [s (s - s1)(s - s2)(s - s3)(s - s4)]            (2.9.41)
The step response in the time domain is obtained by taking the inverse Laplace transform:
g(t) = ℒ⁻¹{G(s)} = Σ res            (2.9.42)
This formula requires even more effort than was spent for the L+T network. We
shall skip the lengthy procedure (which is presented in Appendix 2.3) and give only the
solution, which for all the listed poles and zeros is:
g(t) = 1 + K1 + K3            (2.9.43)
where:
K1 = e^(-σ1 t/T) [M sin(ω1 t/T) + ω1 N cos(ω1 t/T)] / (σ5 ω1)
K3 = e^(-σ3 t/T) [P sin(ω3 t/T) + ω3 Q cos(ω3 t/T)] / (σ5 ω3)            (2.9.44)
whilst A, B, C, M, N, P and Q are the same as for the L+T network (Eq. 2.6.17).
The plots in Fig. 2.9.5 and Fig. 2.9.6 were calculated and drawn by using these
formulae.
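The residue summation of Eq. 2.9.42 is also straightforward to carry out numerically. The following Python sketch does it for the MFED (Bessel) poles of Eq. 2.9.23 and the zero s5 = -10; the gain constant is normalized so that F(0) = 1, as discussed in Appendix 2.2:

```python
import cmath

poles = [-2.8962 + 0.8672j, -2.8962 - 0.8672j,
         -2.1038 + 2.6574j, -2.1038 - 2.6574j]
z5 = -10.0

# normalizing gain so that F(0) = 1 (sign per the (-1)^(N-M) rule, App. 2.2):
k = 1.0 + 0j
for p in poles:
    k *= p
k = -k / z5

def g(t):
    # residue of G(s)*exp(s*t) at s = 0 equals F(0) = 1;
    # then add the residue at each (simple) pole p_i
    total = 1.0 + 0j
    for i, p in enumerate(poles):
        denom = p
        for j, q in enumerate(poles):
            if j != i:
                denom *= (p - q)
        total += k * (p - z5) / denom * cmath.exp(p * t)
    return total.real

assert abs(g(0.0)) < 1e-9        # the step response starts at zero
assert abs(g(10.0) - 1.0) < 1e-6 # and settles to the final value 1
```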
Let us now compare the MFED response with those obtained by Braude and Shea.
The step response relation is the same for all three systems (Eq. 2.9.43, 2.9.44), but the pole
and zero values are different. As it appears from the comparison of the characteristic
polynomial coefficients and even more so from the comparison of the poles and zeros, the
three systems were optimized in different ways. This is evident from Fig. 2.9.7.
Although at first glance all three step responses look very similar (Fig. 2.9.6), a
closer look reveals that the Braude case has an excessive overshoot. The Shea case has the
steepest slope (largest bandwidth), but this is paid for by an extra overshoot and ringing.
The Bessel system has the lowest transient slope; however, it has the minimal overshoot
and it is the first to settle to < 0.1% of the final amplitude value (in about 2.7 T).
[Plot: step responses vo/(ii R) versus t/R(C + Ci); same circuit and parameter values as in Fig. 2.9.2.]
Fig. 2.9.5: The shunt-series peaking circuit step response: a) MFA; b) MFED.
[Plot: step responses vo/(ii R) versus t/R(C + Ci), with a 10× expansion of the 0.90-1.02 vertical region; a) Shea: m1 = 0.133, m2 = 0.467, n = 0.667; b) Braude: m1 = 0.122, m2 = 0.511, n = 0.656; c) Bessel: m1 = 0.100, m2 = 0.464, n = 0.710; L1 = L2 = 0 reference.]
Fig. 2.9.6: The MFED shunt-series peaking circuit step response: a) by Shea; b) by Braude;
c) a true Bessel system. The 10× vertical scale expansion shows the top 10% of the response.
The overshoot in the Braude case is excessive, whilst the Shea version has prolonged
ringing. Although slowest, the Bessel system is the first to settle to < 0.1% of the final value.
The pole layout in Fig. 2.9.7 confirms the statements above. In the Braude case the
two poles with the smaller imaginary part are too far from the imaginary axis to
compensate the peaking of the two closer poles, so the overshoot is inevitable. The Shea
case has the widest pole spread and consequently the largest bandwidth, but the two poles
with the lower imaginary part are too close to the imaginary axis (this is needed in order to
level out the peaks and dips in the frequency response). As a consequence, whilst the
overshoot is just acceptable, there is some long term ringing, impairing the system's
settling time. The Bessel system pole layout follows the theoretical requirement. In spite of
the presence of the zero (located far from the poles, the farthest of all three systems), the
system performance is optimal.
[Pole-zero plot; values as printed: poles, Shea s1,2 = 2.6032, Braude s1,2 = 1.4951, Bessel s1,2 = 2.8976, s3,4 = 2.1024, with imaginary parts j0.9618, j2.6446, j0.8649, j2.6573; zeros (s5): Bessel 10.0, Braude 8.2, Shea 7.5.]
Fig. 2.9.7: The MFED shunt-series peaking circuit pole loci of the three different systems.
The zero of each system is too far from the poles to have much influence. It is interesting how
a similar step response can be obtained using three different optimization strategies. Strictly
speaking, only the Bessel system is optimal.
Let us conclude this section with Table 2.9.1, in which we have collected all the
design parameters, in addition to the bandwidth and rise time improvements and the
overshoots for the cases discussed.
Table 2.9.1
response type    a) MFA    b) MFED    c) MFED    d) MFED
author           PS/EM     PS/EM      Shea       Braude
m1               0.1464    0.1000     0.133      0.122
m2               0.6036    0.4627     0.467      0.511
n                0.5858    0.7101     0.667      0.656
ηb               2.61      2.18       2.44       2.50
ηr               2.72      2.21       2.39       2.36
δ [%]            12.23     0.90       1.86       4.45
[Plot: Vo/(Ii R) versus ω/ωh for the MFA versions of all the circuit configurations a) to i).]
Fig. 2.10.1: MFA frequency responses of all the circuit configurations discussed. By far the
4-pole T-coil response i) has the largest bandwidth.
[Plot: MFED inductive peaking step responses vo/(ii R) versus t/T for all configurations; legend (as far as recoverable): a) no peaking (single-pole); b) series, 2-pole; c) series, 3-pole; d) shunt, 2-pole, 1-zero; e) shunt, 3-pole, 1-zero.]
Fig. 2.10.2: MFED step responses of all the circuit configurations discussed. Again, the 4-pole
T-coil step response i) has the steepest slope, but the 3-pole T-coil response h) is close.
[Plots: L+T peaking circuit step responses vo/(ii R) versus t/T for component variations of ±25%, ±10% and ±40% (Cb, L, k, C, Li, Ci); parameters as printed: T = R(C + Ci), L = R²C, Li = mR²Ci, and the ratios C/Ci and Cb/C with the values 0.65, 0.57, 3.46, 0.068.]
Fig. 2.11.1: Four-pole L+T peaking circuit step response sensitivity to component tolerances.
Such graphs were originally drawn by Carl Battjes for a class lecture at Tektronix, but with
another set of parameters as the reference. The responses presented here were obtained using the
MicroCAP 5 circuit analysis program [Ref. 2.36].
These figures prove that the inductance L, the coupling factor k, and the loading
capacitance C must be kept within close tolerances in order to achieve the desired
performance, whilst the tolerances of the bridging capacitance Cb of the T-coil, the input
coil Li, and the input capacitance Ci are less critical. Therefore, the construction of a properly
calculated T-coil is not a simple matter. In some respects it resembles a digital AND
function: only if all the parameters are set correctly will the result be an efficient peaking
circuit. There is not much room for compromise here.
In the serial production of wideband amplifiers there are always some tolerances of
stray capacitances, so the T-coils must be made adjustable. Usually the coils are wound on
a simple polystyrene cylindrical coil form, with a threaded ferrite core inside. By adjusting
the core the required inductance can be set. However, the coupling factor k depends only
on the coil length to diameter ratio (l/D) [Ref. 2.33] and it is independent of whether the
coil has a ferrite core inside or not. The relation between the coupling factor k and the
ratio l/D is shown in the diagram in Fig. 2.11.2, which is valid for center tapped
cylindrical coils.
[Plot: coupling factor k (0.1 to 1.0, log scale) versus l/D (0.01 to 10), for a center tapped coil with N = 2n turns.]
Fig. 2.11.2: T-coil coupling factor k as a function of the coil's length to diameter (l/D) ratio.
The coil inductance L depends on the number of turns, the length to diameter ratio
set by the coil form on which the coil is wound, and on the ferrite core if any is used; both
the coil form and the core can be obtained from different manufacturers, together with the
formulae for calculating the required number of turns. However, these formulae are often
given as some sort of cookbook recipes, with the key parameters usually in numerical
form for a particular product. As such, they are satisfactory for building general purpose
coils, but they do not offer the understanding needed to perform the optimization procedure
within a set of possible solutions.
The reader is therefore forced to look for more theoretical explanations in standard
physics and electronics textbooks.
The basic inductance formula is:
L = μ0 N² A / l            (2.11.1)
but this is valid only for a single layer coreless coil with a homogeneous magnetic field
(such as a tightly wound toroid or a very long coil). The parameters represent:
L    the inductance in henrys [H];
μ0   the permeability of free space;
N    the number of turns;
A    the area encircled by one wire turn, measured from the wire central path; for a
     cylindrical coil, A = πR² = πD²/4, where R is the radius and D is the
     diameter, both in meters [m];
l    the total coil length in meters [m]; if the turns are wound adjacent to one
     another with a wire of a diameter d, then l = Nd.
The main problem with Eq. 2.11.1 is the term homogeneous; this implies a
uniform magnetic field, entirely contained within the coil, with no appreciable leakage
outwards. Toroidal coils are not easy to build and cannot be made adjustable, so in
practice cylindrical coils are widely used. For a cylindrical coil the magnetic field is of the
same form as that of a stick magnet: the field lines close outside the coil and at both ends
the field is non-homogeneous. Because of this, the inductance is reduced by a form factor
ξ, which is a function of the ratio D/l ( Fig. 2.11.3 ).
An important note for T-coil production: the form factor, and consequently the
inductance, increases with D and decreases with l, in contrast to the coupling factor k. This
additionally restricts our choice of D and l.
Also, if the coil is going to be adjustable, the relative permeability of the core
material, μr, must be taken into account; however, only a part of the magnetic field will be
contained within the core, so we introduce the average permeability, μ̄r, reflecting that only
a part of the turns will encircle the core. The relative permeability of air is 1 and that of
the ferromagnetic core material can be anything up to several hundred. However, since the
field path in air will be much longer than inside the core, the average permeability will be
rather low. Note also that the core material is slow, i.e., its permeability has an upper
frequency limit, often lower than our bandwidth requirement.
Finally, if the bridging capacitance Cb of the T-coil network has to be precisely
known, we must take into account the coil's self capacitance, Cs, which appears in parallel
with the coil, with a value equivalent to a series connection of the capacitances between
adjacent turns. Owing to Cs the inductance will appear lower when measured, so Cs should
also be measured and the actual inductance value calculated from the two measurements. If
the turns are tightly wound, the relative permittivity εr of the wire insulation lacquer must be
considered. Its value is several times larger than for air. The lacquer thickness also
influences Cs. If Cs is too large it can easily be reduced by increasing the distance
between the turns by a small amount, δ, but this will also cause additional field leakage and
reduce the inductance slightly. To compensate, the number of turns can be increased;
because the inductance increases with N², this will outweigh the slight decrease resulting
from the larger length l.
Multi-layer coils are less suitable for use in wideband amplifiers, because of the
high capacitance between adjacent layers.
Fortunately, wideband amplification does not require large inductance values. Also,
since the inductances are always in series with relatively large resistive loads (almost never
less than 50 Ω), the wire resistance and the skin effect can usually be neglected.
With all these considerations the inductance becomes:
L = ξ μ̄r μ0 π D² l / [4 (d + δ)²]            (2.11.2)
Fig. 2.11.3 shows the value of ξ as a function of the ratio D/l. The actual
function is found through elliptic integration of the magnetic field flux density, which is
too complex to be solved analytically here. But a fair approximation, fitting the
experimental data to better than 1%, can be obtained using the following relation:
ξ = a / [a + (D/l) b]            (2.11.3)
where:
a = 2      b = sin(π/3)
[Plot: form factor ξ versus D/l (1 to 100, log scale), with the inset relations L = ξ μ̄r μ0 π D² l / [4(d + δ)²], N = l/(d + δ), and ξ = a/[a + (D/l) b], a = 2, b = sin(π/3).]
Fig. 2.11.3: The ξ factor as a function of the coil diameter to length ratio, D/l. The
equation shown in the upper right corner fits experimental data to better than 1%.
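As an illustration of Eqs. 2.11.2 and 2.11.3, here is a short Python sketch; the coil dimensions are arbitrary example values, not a recommended design:

```python
import math

# Example use of Eqs. 2.11.2 and 2.11.3 (illustrative values): a single-layer
# air-core coil, D = 4 mm diameter, l = 6 mm winding length, wire pitch
# d + delta = 0.3 mm, no core (average relative permeability = 1).
mu0 = 4e-7 * math.pi          # permeability of free space [Vs/Am]
mu_r_avg = 1.0                # air core
D, l, pitch = 4e-3, 6e-3, 0.3e-3

a = 2.0
b = math.sin(math.pi / 3.0)
xi = a / (a + (D / l) * b)    # form factor, Eq. 2.11.3

L = xi * mu_r_avg * mu0 * math.pi * D**2 * l / (4.0 * pitch**2)  # Eq. 2.11.2
N = l / pitch                 # number of turns at this pitch

print(f"N = {N:.0f} turns, xi = {xi:.3f}, L = {L*1e9:.1f} nH")
```

For these dimensions the form factor comes out near 0.78 and the inductance is below one microhenry, which is the order of magnitude typical for wideband peaking coils.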
Inductances are susceptible to external fields, mainly from the power supply, or
other nearby inductances. The influence of a nearby inductance can be minimized by a
perpendicular orientation of coil axes. Otherwise, the circuit should be appropriately
shielded, but the shield will act as a shorted single-turn inductance, lowering the effective
coil inductance if it is too close.
In modern miniaturized, bandwidth hungry circuits the coil dimensions become
critical, and one possible solution is to construct the coil in a planar spiral form on two
sides of a printed circuit board or even within an integrated circuit. This gives the
possibility of more tightly controlled parameter tolerances, but there is no free lunch: the
price to pay is in many weeks or even months of computer simulation before a satisfactory
solution is found by trial and error, since the exact mathematical relations are extremely
complicated (a good example of how this is done can be found in the excellent article by
J.Van Hese of Agilent Technology [Ref. 2.37], where the finite element numerical analysis
method is used).
The following figures show a few examples of planar coils made directly on the
surface of IC chips, ceramic hybrids, or double-sided conventional PCBs.
Fig. 2.11.6: A planar T-coil with a high coupling factor, realized on a conventional
double-sided PCB. Multi-turn spiral structures are also possible, but need at least a
three-layer board for making the inner to outer turn connections.
References:
[2.1]
[2.2]
[2.3]
[2.4]
E.L. Ginzton, W.R. Hewlett, J.H. Jasberg, J.D. Noe, Distributed Amplification,
Proc. I.R.E., Vol. 36, August 1948, pp. 956-969.
[2.5]
[2.6]
P. Starič, Three- and Four-Pole Tapped Coil Circuits for Wideband/Pulse Amplifiers,
Elektrotehniški vestnik, 1983, pp. 129-137.
[2.7]
[2.8]
[2.9]
J.L. Addis, Good Engineering and Fast Vertical Amplifiers, Part 4, section 14,
Analog Circuit Design, edited by J. Williams,
Butterworth-Heinemann, Boston, 1991.
[2.10]
[2.11]
[2.12]
[2.13]
[2.14]
[2.15]
G.A. Korn & T.M. Korn, Mathematical Handbook for Scientists and Engineers,
McGraw-Hill, New York, 1961.
[2.16]
[2.17]
[2.18]
[2.19]
[2.20]
W.R. Horton, J.H. Jasberg, J.D. Noe, Distributed Amplifiers: Practical Considerations and
Experimental Results, Proc. I.R.E., Vol. 39, pp. 748-753.
[2.21]
R.I. Ross, Wang Algebra Speeds Network Computation of Constant Input Impedance Networks,
(Internal Publication), Tektronix, Inc., Beaverton, Ore. 1968.
[2.22]
[2.23]
[2.24]
[2.25]
G.B. Braude, K.V. Epaneshnikov, B.J. Klymushev, Calculation of a Combined Circuit for the
Correction of Television Amplifiers (in Russian),
Radiotekhnika, T. 4, No. 6, Moscow, 1949, pp. 24-33.
[2.26]
[2.27]
[2.28]
[2.29]
[2.30]
[2.31]
[2.32]
[2.33]
[2.34]
Mathematica, Wolfram Research, Inc., 100 Trade Center Drive, Champaign, Illinois,
http://www.wolfram.com/
[2.35]
Macsyma, Symbolics, Inc., 8 New England Executive Park, East Burlington, Massachusetts, 01803,
http://www.scientek.com/macsyma/main.htm
See also Maxima (free version, GNU Public Licence):
http://www.ma.utexas.edu/users/wfs/maxima.html
[2.36]
[2.37]
J. Van Hese, Accurate Modeling of Spiral Inductors on Silicon for Wireless RF-IC Designs,
http://www.techonline.com/ Feature Articles Feature Archive November 20, 2001
See also: http://www.agilent.com/, and:
L. Knockaert, J. Sercu and D. Zutter, "Generalized Polygonal Basis Functions for the
Electromagnetic Simulation of Complex Geometrical Planar Structures," IMS-2001
[2.38]
J.N. Little and C.B. Moller, The MathWorks, Inc.: MATLAB-V For Students
(with the Matlab program on a CD), Prentice-Hall, 1998,
http://www.mathworks.com/
[2.39]
Derive, http://education.ti.com/product/software/derive/
[2.40]
MathCAD, http://www.mathcad.com/
Appendix 2.1
First-order polynomial:
a x + b = 0
Canonical form:
x + b/a = 0
Solution:
x = -b/a
Second-order polynomial:
a x² + b x + c = 0
Canonical form:
x² + (b/a) x + c/a = 0
Solutions:
x1,2 = [-b ± √(b² - 4ac)] / 2a
Third-order polynomial (canonical form):
x³ + a x² + b x + c = 0
Auxiliary quantities:
Q = a² - 3b
D = 4a³c - a²b² - 18abc + 4b³ + 27c²
R = 2a³ - 9ab + 27c
Solutions, with φ = arctan[ jR / (3√(3D)) ]:
x1 = -a/3 + (2√Q/3) sin(φ/3)
x2 = -a/3 - (√Q/3) [ sin(φ/3) - √3 cos(φ/3) ]
x3 = -a/3 - (√Q/3) [ sin(φ/3) + √3 cos(φ/3) ]
A = ±√(8y + a² - 4b)
As has been proven by the French mathematician Évariste Galois (1811-1832), the solutions of polynomials of order 5 or higher cannot in general be
expressed analytically by radicals of the polynomial coefficients. In such cases, the
roots can be found by numerical computation methods (users of Matlab can try the
ROOTS routine, which calculates the roots from polynomial coefficients by numerical
methods; see also the POLY routine, which finds the coefficients from the roots).
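The same can be done in Python with NumPy, whose np.roots and np.poly play the roles of the Matlab ROOTS and POLY routines (a sketch, using the 4th-order Bessel polynomial from Part 2 as the example):

```python
import numpy as np

# NumPy equivalents of the Matlab ROOTS and POLY routines:
coeffs = [1.0, 10.0, 45.0, 105.0, 105.0]   # 4th-order Bessel polynomial
roots = np.roots(coeffs)                    # numerical roots from coefficients
back = np.poly(roots)                       # coefficients recovered from roots

# real parts of the Bessel poles, as listed in Eq. 2.9.23:
assert np.allclose(np.sort(roots.real),
                   [-2.8962, -2.8962, -2.1038, -2.1038], atol=1e-3)
assert np.allclose(back, coeffs)
```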
Appendix 2.2
Normalization of complex frequency response functions
or
Why do some expressions have strange signs?
Do not be afraid of mathematics!
It is probably the only rational product of the human mind!
(E. Margan)
Consider an all-pole transfer function with N poles s_i:
F(s) = 1 / ∏_{i=1..N} (s - s_i)            (A2.2.1)
Its value at DC (s = 0) is:
F(0) = 1 / ∏_{i=1..N} (0 - s_i) = 1 / ∏_{i=1..N} (-s_i)            (A2.2.2)
The normalized function is obtained by dividing F(s) by its DC value:
Fn(s) = F(s)/F(0) = ∏_{i=1..N} (-s_i) / ∏_{i=1..N} (s - s_i)            (A2.2.3)
The numerator of the last term in Eq. A2.2.3 can be written so that the signs
are collected together in a separate product, defining the sign of the total:
Fn(s) = (-1)^N ∏_{i=1..N} s_i / ∏_{i=1..N} (s - s_i)            (A2.2.4)
This means that all odd order functions must be multiplied by -1 in order
to have a correctly normalized expression. But please note that the sign defining
expression (-1)^N is not the consequence of all our poles lying in the left half of the
complex plane, as is sometimes wrongly explained in the literature!
In Eq. A2.2.4 the poles still retain their actual value, be it positive or negative.
The term (-1)^N is just the consequence of the mathematical operation (subtraction)
required by the function: s must acquire the exact value of s_i, sign included, if the
function is to have a pole at s_i:
s - s_i → 0  ⟹  F(s_i) → ∞            (A2.2.5)
In some literature the sign is usually neglected because we are all too often
interested in the frequency response magnitude, which is the absolute value of F(s),
or |F(s)|. However, as amplifier designers we are interested mainly in the system's
time domain performance. If we calculate it by the inverse Laplace transform we must
have the correct sign of the transfer function, and consequently the signs of the
residues at each pole.
In addition it is important to note that a system with zeros must have the
product of zeros normalized in the same way (even if some of the systems with zeros
do not have a DC response, such as high pass and band pass systems). If our system
has poles p_i and zeros z_k, the normalized transfer function is:
Fn(s) = [ (-1)^N ∏_{i=1..N} p_i / ( (-1)^M ∏_{k=1..M} z_k ) ] · ∏_{k=1..M} (s - z_k) / ∏_{i=1..N} (s - p_i)            (A2.2.6)
with the sign factor:
(-1)^N / (-1)^M            (A2.2.7)
which is, incidentally, also equal to (-1)^(N-M), but there is nothing mystical about
that, really.
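A small numerical illustration of Eq. A2.2.4 (a Python sketch, with three assumed real poles):

```python
import numpy as np

# With N = 3 (odd) real poles, the normalizing constant is (-1)^N times the
# product of the poles, and the normalized function has Fn(0) = 1:
poles = np.array([-1.0, -2.0, -3.0])
N = len(poles)

k = (-1)**N * np.prod(poles)        # = -(-6) = 6: the (-1)^N factor fixes the sign

def Fn(s):
    return k / np.prod(s - poles)

assert k > 0
assert abs(Fn(0.0) - 1.0) < 1e-12   # correctly normalized DC gain
```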
Wideband Amplifiers
Part 3:
Back To Basics
This part deals with some elementary amplifier configurations
which can serve as building blocks of the multi-stage amplifiers
described in Parts 4 and 5, together with the inductive peaking
circuits described in Part 2.
Today two schools of thought prevail amongst amplifier
designers: the first one (to which most of the more experienced
generation belongs) says that cheap operational amplifiers can
never fulfil the conflicting requirements of good wideband design;
the other (mostly the fresh forces) says that the analog IC
production technology advances so fast that by the time needed to
design a good wideband amplifier, the new opamps on the market
will render it obsolete.
Both of them are right, of course!
An important point, however, is that very few amplifier
designers have a silicon chip manufacturing facility next door.
Those who have often discover that component size reduction
solves half of the problems, whilst packing the components close
together produces a nearly equal number of new problems.
Another important point is that computer simulation tells you
only a part of what will be going on in the actual circuit. Not
because there would be anything wrong with the computer, its
program or the circuit modeling method used, but because
designers, however experienced they are, can not take everything
into account right from the start; and to be able to complete the
simulation in the foreseeable future many things are left out
intentionally.
A third important point is that by being satisfied with the
performance offered by LEGO-tronics (playing with general
purpose building blocks, as we call it and we really do not mean
anything bad by that!), one intentionally limits oneself to a
performance which, in most cases, is an order of magnitude below
what is achievable by the current state of the art technology.
Not to speak of there being only a limited amount of experience to
be gained by playing just with the outside of those nice little black
boxes.
A wise electronics engineer always takes some time to build a
discrete model of the circuit (the most critical part, at least) in
order to evaluate the influence of strays and parasitics and find a
way of improving it. Even if the circuit will eventually be put on a
silicon chip those strays will be scaled down, but will not
disappear.
That is why we think that it is important to go back to basics.
List of Figures:
Fig. 3.1.1: The common emitter amplifier .............................................................................................. 3.9
Fig. 3.2.1: The common base amplifier ................................................................................................ 3.17
Fig. 3.2.2: Transistor gain as a function of frequency .......................................................................... 3.19
Fig. 3.2.3: Emitter to base impedance conversion ................................................................................ 3.20
Fig. 3.2.4: Base to emitter impedance conversion ................................................................................ 3.21
Fig. 3.2.5: Capacitive load reflects into the base with negative components ....................................... 3.23
Fig. 3.2.6: Inductive source reflects into the emitter with negative components .................................. 3.24
Fig. 3.2.7: Base to emitter RC network transformation ....................................................... 3.26
Fig. 3.2.8: Emitter to base RC network transformation ....................................................... 3.28
Fig. 3.2.9: Re Ce network transformation ............................................................................. 3.30
Fig. 3.2.10: Common collector amplifier ............................................................................................. 3.30
Fig. 3.3.1: Common base amplifier ...................................................................................................... 3.33
Fig. 3.3.2: Common base amplifier input impedance ........................................................................... 3.35
Fig. 3.4.1: Cascode amplifier circuit schematic ................................................................................... 3.37
Fig. 3.4.2: Cascode amplifier small signal model ................................................................................. 3.37
Fig. 3.4.3: Cascode amplifier parasitic resonance damping ................................................................. 3.39
Fig. 3.4.4: Q2 emitter input impedance with damping ........................................................................... 3.40
Fig. 3.4.5: The step response pre-shoot due to C_μ1 cross-talk ............................................................ 3.40
Fig. 3.4.6: Q2 compensation method with a base RC network .............................................................. 3.41
Fig. 3.4.7: Q2 emitter impedance compensation .................................................................................... 3.42
Fig. 3.4.8: Thermally distorted step response ....................................................................................... 3.43
Fig. 3.4.9: The optimum bias point ...................................................................................................... 3.44
Fig. 3.4.10: The thermal compensation network .................................................................................. 3.46
Fig. 3.4.11: The dynamic collector resistance and the Early voltage ................................................... 3.46
Fig. 3.4.12: The compensated cascode amplifier ................................................................................. 3.47
Fig. 3.5.1: Emitter peaking in cascode amplifiers ................................................................................ 3.50
Fig. 3.5.2: Emitter peaking pole pattern ............................................................................................... 3.53
Fig. 3.5.3: Negative input impedance compensation ............................................................................ 3.56
Fig. 3.6.1: Cascode amplifier with a T-coil interstage coupling ........................................................... 3.57
Fig. 3.6.2: T-coil loaded with the simplified input impedance ............................................................. 3.58
Fig. 3.6.3: T-coil coupling frequency response ..................................................................................... 3.62
Fig. 3.6.4: T-coil coupling phase response ........................................................................................... 3.63
Fig. 3.6.5: T-coil coupling envelope delay response ............................................................................ 3.63
Fig. 3.6.6: T-coil coupling step response ............................................................................................. 3.64
Fig. 3.6.7: Cascode input impedance including the base spread resistance .......................................... 3.65
Fig. 3.6.8: T-coil compensation for the base spread resistance ............................................................ 3.65
Fig. 3.6.9: T-coil including the base lead inductance ........................................................................... 3.66
Fig. 3.6.10: A more accurate model of r_b and C_μ .............................................................................. 3.67
Fig. 3.6.11: The folded cascode ......................................................................................................... 3.68
Fig. 3.7.1:
Fig. 3.7.2:
Fig. 3.7.3:
Fig. 3.7.4:
Fig. 3.9.8: JFET step response including the signal source resistance ................................................. 3.88
Fig. 3.9.9: JFET source follower input impedance model .................................................................... 3.90
Fig. 3.9.10: Normalized negative input conductance ........................................................................... 3.91
Fig. 3.9.11: JFET negative input impedance compensation ................................................................. 3.92
Fig. 3.9.12: JFET input impedance Nyquist diagrams ......................................................................... 3.93
Fig. 3.9.13: Alternative JFET negative input impedance compensation .............................................. 3.94
List of Tables:
extension within a single stage and we shall discuss it next. This will be followed by an
analysis of the JFET source follower, commonly used as the input stage of oscilloscopes
and other measuring instruments. Such a stage can have a negative input impedance
at high frequencies when the JFET source is loaded by a capacitor (which is always the
case) and we shall show how to compensate this very undesirable property.
In Part 2 we have analyzed the T-coil peaking circuit with a purely capacitive tap
to ground impedance. However, if the T-coil circuit is used for a transistor interstage
coupling the tap to ground impedance ceases to be purely capacitive. This fact requires
a new analysis, with which we shall deal in the last section.
Probably, the reader will ask how accurately we need to model the active
devices in our circuits to obtain a satisfying approximation. In 1954, Ebers and Moll
[Ref. 3.9] had already described a relatively simple non-linear model, which, over the
years, was improved by the same authors, and lastly in 1970 by Gummel and Poon [Ref.
3.10]. Modern computer programs for circuit simulation allow the user to trade
simulation speed for accuracy by selecting models with different levels of complexity
(e.g., an older version of MicroCAP [Ref. 3.30] had 3 EM and 2 GP models for the
BJT, the most complex GP2 using a total of 51 parameters). For simpler circuit analysis
we shall use the linearized high frequency EM2 model, explained in detail in Sec. 3.1.
All these models look so simple and perform so well, that it seems as if anyone
could have created them. Nothing could be farther from the truth. It takes lots of
classical physics (Boltzmann's transport theory, Gauss' theorem, Poisson's equation,
the charge current mean density integral calculus, the complicated pn junction
boundary conditions, Maxwell's equations, ...), as well as quantum physics (Fermi
levels, Schrödinger's equation, the Pauli principle, charge generation, injection,
recombination and photon-electron and phonon-electron scattering phenomena, to
name just a few important topics) to be judiciously applied in order to find well defined
special cases and clever approximations that would, within limits, provide a model
simple enough for everyday use. Of course, if pushed too far the model fails, and there
is no other way to the solution but to rework the physics neglected. In our analysis we
shall try not to go that far.
It cannot be overstressed that in our analysis we are dealing with models of
semiconductor devices! As Philip Darrington, former Wireless World editor, put it in
one of his editorials, the map is not the territory, just as the schematic diagram is not
the circuit. As in any branch of science, we build a (theoretical) model, analyze it, and
then compare with the real thing; if it fits, we have had a good nose there, or perhaps we
have simply been lucky; if it does not fit we go back to the drawing board.
In the macroscopic world, from which all our experience arises, most models are
quite simple, since the size ratio of objects, which can still be perceived directly with
our senses, to the atomic size, where some odd phenomena begin to show up, is some 6
orders of magnitude; thus the world appears to us to be smooth and continuous.
However, in the world of ever shrinking semiconductor devices we are getting ever
closer to the quantum phenomena (e.g., the dynamic range of our amplifiers is limited
by thermal noise, which is ultimately a quantum effect). But long before we approach
the quantum level we should stay alert: even if we forget that the oscilloscope probe
loads our circuit with a shunt capacitance of some 10–20 pF and a serial inductance of
about 70–150 nH of the ground lead, the circuit will not forget, and sometimes not
forgive, either!
Fig. 3.1.1: The common emitter amplifier: a) circuit schematic diagram; the current and voltage
vectors are drawn for the npn type of transistor; b) high frequency small signal equivalent circuit;
the components included in the Q1 model are explained in the text; c) simplified equivalent
circuit; d) an oversimplified circuit where C_t = C_π + A·C_μ ≈ constant.
During the first steps of circuit design we can usually neglect the non-linear
effects and obtain a satisfactory performance description by using a first order
approximation of the transistor model, Fig. 3.1.1c. Some of the circuit parameters can
even be estimated by an oversimplified model of Fig. 3.1.1d. By assuming a certain
operating point OP, set by the DC bias conditions, we can explain the meaning of the
model components. On the basis of these explanations it will become clear why and
when we may neglect some of them, thus simplifying the model and its analysis.
    g_m = ∂i_c/∂v_π|OP ≈ 1/r_e    transconductance [S] (siemens);

    V_T = kT/q ≈ 26 mV    thermal voltage [V] (volt);

    r_o = ∂v_ce/∂i_c|OP ≈ (V_A + V_ce)/I_c    dynamic collector to emitter resistance [Ω] (ohm),
        where V_A is the Early voltage;

    r_μ = ∂v_cb/∂i_b|OP ≈ (V_A + V_ce)/I_b    dynamic collector to base resistance [Ω] (ohm);

    r_π = ∂v_be/∂i_b|OP ≈ (β₀ + 1) r_e ≈ β₀/g_m    dynamic base to emitter resistance [Ω] (ohm);

    r_b    base spread resistance (resistance between the base terminal and the base
        emitter junction); value range: 10 Ω < r_b < 200 Ω;

    r_co   collector lead ohmic resistance;
    r_eo   emitter lead ohmic resistance;

    C_μ    collector to base junction capacitance [F] (farad);

    C_π = 1/(ω_T r_e) = 1/(2π f_T r_e)    base to emitter junction capacitance [F] (farad);

    C_sub  collector to substrate capacitance [F] (farad);

    β = i_c/i_b    the transistor current gain, the collector to base current ratio; the gain
        frequency dependence is modeled by the characteristic time constant τ_T:

        β = β₀/(1 + β₀ s τ_T)

        where:

        β₀ = I_c/I_b
The intrinsic emitter resistance is:

    r_e = V_T/I_e = 26 mV/I_e ≈ 26 mV/I_c    (3.1.1)
The collector to base junction capacitance depends on the reverse voltage as:

    C_μ = C_μ0/(1 + V_cb/V_jc)^γc    (3.1.2)

This equation is valid under the assumption that there is no charge in the
collector to base depletion layer. The meaning of the symbols is:

    C_μ0   junction capacitance [F] (farad) (when V_cb = 0 V)
    V_cb   collector to base voltage [V] (volt)
    V_jc   collector to base barrier potential, 0.6–0.8 V for silicon transistors
    γc     collector voltage potential gradient factor
           (0.5 for abrupt junctions and 0.33 for graded junctions)
Obviously, C_μ decreases with increasing collector voltage. For small signals
(amplifier input stage) V_cb does not change very much, so we can assume C_μ to be
constant, or, in other words, linear. However, in middle stage and output transistors,
V_cb changes considerably. As already mentioned, in such cases computer aided
circuit simulation is mandatory (after the initial linearized approximation has been
found acceptable). Fortunately, most transistor manufacturers provide diagrams
showing the dependence of C_μ on V_cb. The reader can find a very good review for 25
of the most commonly used transistors in [Ref. 3.21, p. 556].
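The voltage dependence of Eq. 3.1.2 is easy to evaluate numerically. The sketch below assumes illustrative component values (C_μ0 = 4 pF, V_jc = 0.7 V, γc = 0.5 for an abrupt junction); they are not data for any particular transistor.

```python
# Numeric sketch of Eq. 3.1.2: the collector to base junction capacitance
# falls as the reverse voltage V_cb increases. The parameter values are
# assumed for illustration only.

def c_mu(v_cb, c_mu0=4e-12, v_jc=0.7, gamma_c=0.5):
    """Junction capacitance in farads at reverse voltage v_cb (volts)."""
    return c_mu0 / (1.0 + v_cb / v_jc) ** gamma_c

# At V_cb = 0 the capacitance equals C_mu0; at 10 V reverse bias this
# abrupt junction has lost roughly three quarters of it.
print(c_mu(0.0))     # 4e-12
print(c_mu(10.0))    # ~1.02e-12
```

This is why the text recommends running the common base stage (Sec. 3.3) at a high reverse base voltage: the larger V_cb directly shrinks C_μ.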
The input base emitter capacitance C_π strongly depends on the emitter current
and hence on the transconductance g_m. Since we can not directly access the internal
base junction to measure C_π and C_μ separately, we calculate C_π from the total
equivalent input capacitance C_t (see Fig. 3.1.1d), from which we first subtract C_μ:

    C_π = C_t − C_μ = g_m τ_T − C_μ = 1/(2π f_T r_e) − C_μ    (3.1.3)

and, since usually C_μ ≪ C_π:

    C_π ≈ 1/(2π f_T r_e)    (3.1.4)

The product r_e C_π is called the transistor time constant τ_T = 1/ω_T, where
ω_T = 2π f_T. The value ω_T = 1/(r_e C_π) sets the dominant pole of the
amplifier and thus it is the main bandwidth limiting factor. In our further discussions we
shall find ways to drastically reduce the influence of C_π, at the expense of the
amplification factor.
The next problem is to calculate the input impedance. Here we must consider
the Miller effect [Ref. 3.7, 3.12] owed to the capacitance C_μ (in practice, there is also a
CB lead stray capacitance, parallel to the junction capacitance, that has to be taken into
account). Therefore we first calculate the input admittance looking right into the
internal junction, behind r_b, in Fig. 3.1.1c. The current i_μ flowing through C_μ is:

    i_μ = (v_π − v_o) s C_μ    (3.1.5)

The output voltage is:

    v_o = −g_m v_π R_L    (3.1.6)

so the current through C_μ becomes:

    i_μ = v_π (1 + g_m R_L) s C_μ    (3.1.7)

and the junction input admittance owed to C_μ is:

    Y_μ = i_μ/v_π = s C_μ (1 + g_m R_L)    (3.1.8)

From this equation it follows that the junction input impedance, owed to
capacitance C_μ, is a capacitance with the value:

    C_M = (1 + g_m R_L) C_μ = (1 + A_v) C_μ    (3.1.9)
Note the negative sign in Eq. 3.1.6: actually, the output voltage is v_o = V_cc − i_c R_L. However, since we
have agreed to replace the supply voltage with a short circuit, we are left with the negative part only.
When the voltage gain is large the effect of C_μ (and, consequently, of C_M) becomes
prevalent. However, there are ways of reducing the effect of A_v C_μ (by lowering the
voltage gain or by using other circuit configurations); we discuss them in later sections.
The complete input impedance that the signal source would see at the base is:

    Z_b = r_b + r_π ∥ 1/(s C_π) ∥ 1/(s C_M)    (3.1.10)

    Z_b = r_b + r_π/[1 + s (C_π + C_M) r_π]    (3.1.11)

With C_t = C_π + C_M, the divider formed by R_s, r_b and the junction impedance gives:

    v_π/v_i = r_π/[r_π + R_s + r_b + s C_t r_π (R_s + r_b)]    (3.1.12)

    v_π = v_i r_π/[r_π + R_s + r_b + s C_t r_π (R_s + r_b)]    (3.1.13)

and the voltage gain is:

    A_v = v_o/v_i = −g_m R_L r_π/[r_π + R_s + r_b + s C_t r_π (R_s + r_b)]    (3.1.14)
We would like to separate the frequency dependent part of A_v from the frequency
independent part, in a normalized form, as:

    A_v(s) = A_0 · (−s₁)/(s − s₁)    (3.1.15)
To achieve this separation we first divide both the numerator and the
denominator of Eq. 3.1.14 by C_t r_π (R_s + r_b):

    A_v = −g_m R_L/[C_t (R_s + r_b)] · 1/[s + (r_π + R_s + r_b)/(C_t r_π (R_s + r_b))]    (3.1.16)

To make the two ratios equal we must multiply the numerator by (r_π + R_s + r_b)/r_π
and then multiply the whole by the reciprocal:

    A_v = −g_m R_L r_π/(r_π + R_s + r_b) ·
          [(r_π + R_s + r_b)/(C_t r_π (R_s + r_b))]/[s + (r_π + R_s + r_b)/(C_t r_π (R_s + r_b))]    (3.1.17)

This now has the form of Eq. 3.1.15:

    A_v = A_0 · (−s₁)/(s − s₁)    (3.1.18)

with the frequency independent part:

    A_0 = −g_m R_L r_π/(r_π + R_s + r_b)    (3.1.19)

and the pole:

    s₁ = −(R_s + r_b + r_π)/[(R_s + r_b) r_π C_t]    (3.1.20)
Since s₁ is a simple real pole its magnitude is equal to the system's upper half power frequency:

    ω_h = |s₁| = (R_s + r_b + r_π)/[(R_s + r_b) r_π C_t]
        = (R_s + r_b + r_π)/{(R_s + r_b) r_π [C_π + (1 + g_m R_L) C_μ]}    (3.1.21)

and it can be seen that ω_h is inversely proportional to the parallel combination of (R_s + r_b)
with r_π, and inversely proportional to C_π, C_μ, g_m, and R_L.
If R_s + r_b ≪ r_π and if R_L is very small, the approximate value of the pole is:

    |s₁| ≈ 1/(r_π C_π) = g_m/(β₀ C_π) = ω_T/β₀    (3.1.22)
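The bandwidth estimate of Eq. 3.1.21 is easy to evaluate. The sketch below uses assumed but representative values (I_c ≈ 10 mA, β₀ = 80, r_b = 47 Ω, R_s = 50 Ω, R_L = 100 Ω, C_π = 100 pF, C_μ = 2 pF); none of these are prescribed by the text at this point.

```python
# Dominant-pole bandwidth of the common emitter stage, Eq. 3.1.21,
# with C_t = C_pi + (1 + g_m*R_L)*C_mu. All values are assumed examples.
from math import pi

g_m   = 0.385        # S, transconductance at ~10 mA
beta0 = 80.0
r_pi  = beta0 / g_m  # ~208 ohm
r_b   = 47.0
R_s   = 50.0
R_L   = 100.0
C_pi  = 100e-12
C_mu  = 2e-12

C_t = C_pi + (1.0 + g_m * R_L) * C_mu            # total input capacitance
w_h = (R_s + r_b + r_pi) / ((R_s + r_b) * r_pi * C_t)
f_h = w_h / (2.0 * pi)                           # upper half power frequency
print(round(f_h / 1e6, 1), "MHz")
```

Even with a modest gain (g_m R_L ≈ 38) the Miller term nearly doubles C_t, and the stage bandwidth collapses to the low tens of MHz; this motivates the cascode and emitter peaking techniques of the following sections.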
Before more sophisticated circuits were invented, the common emitter amplifier
was used extensively (with many amplifier designers having a hard time and probably
cursing both C_π and C_μ). In 1956 G. Bruun [Ref. 3.22] thoroughly analyzed this type of
amplifier with the added shunt-series inductive peaking circuit. In view of modern
wideband amplifier circuits, this reference is only of historical value today.
Nevertheless, the common emitter stage represents a good starting point for the
discussion of more efficient wideband amplifier circuits.
    α₀ = I_c/I_e    (3.2.1)

    g_m = α₀/r_e = β₀/[(1 + β₀) r_e]    (3.2.2)

where β₀ is the common emitter DC current amplification factor. If β₀ ≫ 1 then
α₀ ≈ 1, so the collector current I_c is almost equal to the emitter current I_e and
g_m ≈ 1/r_e. This simplification is often used in practice.
[Fig. 3.2.1: The common base amplifier: a) circuit schematic; b) small signal equivalent circuit.]
For the moment let us assume that the base resistance r_b = 0 and consider the
low frequency relations. The input resistance is:

    r_π = v_π/i_b    (3.2.3)

where v_π is the base to emitter voltage. Since the emitter current is:

    i_e = i_b + i_c = i_b + β₀ i_b = i_b (1 + β₀)    (3.2.4)

then the base current is:

    i_b = i_e/(1 + β₀)    (3.2.5)

and consequently:

    r_π = v_π (1 + β₀)/i_e = r_e (1 + β₀) ≈ β₀ r_e    (3.2.6)

The last simplification is valid if β₀ ≫ 1. To obtain the input impedance at high
frequencies the parallel connection of C_π must be taken into account:

    Z_b = (1 + β₀) r_e/[1 + (1 + β₀) s C_π r_e]    (3.2.7)

The base current is:

    i_b = v_π/Z_b = v_π [1 + (1 + β₀) s C_π r_e]/[(1 + β₀) r_e]    (3.2.8)

Furthermore, since i_c = β₀ i_b at low frequencies:

    i_c = β₀ v_π/[(1 + β₀) r_e]    (3.2.9)

    i_c = α₀ v_π/r_e ≈ g_m v_π    (3.2.10)
The frequency dependent current gain is then:

    β(s) = i_c/i_b = β₀/[1 + (1 + β₀) s r_e C_π] ≈ β₀/(1 + s β₀ τ_T)    (3.2.11)

In the very last expression we assumed that β₀ ≫ 1 and τ_T = r_e C_π = 1/ω_T,
where ω_T = 2π f_T is the angular frequency at which the current amplification factor β
decreases to unity. The parameter τ_T (and consequently ω_T) depends on the internal
configuration and structure of the transistor. Fig. 3.2.2 shows the frequency dependence
of β and the equivalent current generator.
[Fig. 3.2.2: Transistor gain as a function of frequency: a) |β| is flat at β₀ up to the corner at
ω_T/β₀ and falls at −20 dB/decade, reaching unity at ω_T; b) the equivalent current generator.]
Dividing both numerator and denominator by β₀ τ_T, we can express β(s) in terms of ω_T:

    β(s) = ω_T/(s − s₁),    s₁ = −ω_T/β₀    (3.2.12)

where s₁ is the pole at −ω_T/β₀. This relation will become useful later, when we shall
apply one of the peaking circuits (from Part 2) to the amplifier. At very high
frequencies, or if β₀ ≫ 1, the term s τ_T prevails. In this case, from Eq. 3.2.11:
    i_c/i_b ≈ 1/(s τ_T) = 1/(jω r_e C_π)    (3.2.13)

from which:

    C_π = 1/(ω_T r_e)    (3.2.14)

This simplified relation represents the −20 dB/decade asymptote in Fig. 3.2.2a.
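The two asymptotes of Fig. 3.2.2a follow directly from the single-pole model of Eq. 3.2.11. The sketch below uses β₀ = 80 and f_T = 600 MHz, the values of the worked examples later in this section.

```python
# |beta(jw)| of the single-pole model beta0/(1 + j*w*beta0*tau_T),
# Eq. 3.2.11-3.2.13: flat at beta0 below omega_T/beta0, then falling as
# omega_T/omega (-20 dB/decade) and reaching unity at f_T.
from math import pi

beta0 = 80.0
f_T = 600e6
tau_T = 1.0 / (2.0 * pi * f_T)   # ~265 ps

def beta_mag(f):
    """Magnitude of the current gain i_c/i_b at frequency f (Hz)."""
    w = 2.0 * pi * f
    return beta0 / (1.0 + (w * beta0 * tau_T) ** 2) ** 0.5

print(round(beta_mag(1e6), 1))    # well below the corner: essentially beta0
print(round(beta_mag(600e6), 2))  # at f_T the current gain is ~1
```

The corner frequency f_T/β₀ = 7.5 MHz is where the two asymptotes meet; above it the transistor behaves as a current integrator with time constant τ_T.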
[Fig. 3.2.3: Emitter to base impedance conversion: a) circuit; b)–d) equivalent circuits;
at low frequencies Z_b ≈ (β₀ + 1) Z_e, at high frequencies Z_b → Z_e + Z_e/(s τ_T).]
We know that:

    Z_b = (1 + β) Z_e    (3.2.15)

With β from Eq. 3.2.11 this becomes:

    Z_b = Z_e + β₀ Z_e/(1 + s β₀ τ_T)    (3.2.16)

and at high frequencies, where s β₀ τ_T ≫ 1:

    Z_b ≈ Z_e + Z_e/(s τ_T)    (3.2.17)

Conversely, an impedance in the base is seen from the emitter as:

    Z_e = Z_b/(1 + β)    (3.2.18)
The same conversion can be written with admittances:

    Y_e = (1 + β) Y_b = (1 + β)/Z_b    (3.2.19)

    Y_e = 1/Z_b + β₀/[Z_b (1 + s β₀ τ_T)] = 1/Z_b + 1/[Z_b (1/β₀ + s τ_T)]    (3.2.20)

and, for s τ_T ≫ 1/β₀:

    Y_e ≈ 1/Z_b + 1/(s τ_T Z_b)    (3.2.21)
[Fig. 3.2.4: Base to emitter impedance conversion: a) circuit; b)–d) equivalent circuits;
at low frequencies Z_e ≈ Z_b/(β₀ + 1).]
"
= 7T "
" = "
"
"
"!
^b
"
"
"
=G
=G
= 7T = G
" = 7T
"!
!
"
"!
=G
=# 7T G
"!
= 7T "
(3.2.22)
(3.2.23)
Let us synthesize this expression by a simple continued fraction expansion [Ref. 3.27]:
    Y_b = s C − s C β₀/(1 + β₀ + s β₀ τ_T)    (3.2.24)
The fraction on the right is a negative admittance with the corresponding impedance:
    Z_b' = −(1 + β₀ + s β₀ τ_T)/(s C β₀) = −τ_T/C − (1 + β₀)/(s C β₀)    (3.2.25)
The first term is a negative resistance; with C_π = τ_T/r_e it can also be written as:

    −τ_T/C = −r_e C_π/C    (3.2.26)

The second term corresponds to a negative capacitance:

    C' = −C β₀/(1 + β₀) = −α₀ C    (3.2.27)
"
7T
"
G
= !! G
-3.22-
(3.2.28)
P. Stari, E. Margan
7T
G
7T#
!! "
=# !#!
"
7T# # #
= !!
7T#
"
4=G
=# !#!
(3.2.29)
[Fig. 3.2.5: A capacitive load C in the emitter reflects into the base as C in parallel with the
series connection of a negative resistance −τ_T/C and a negative capacitance −α₀ C.]
The negative input (base) conductance G_b can cause ringing on steep signals or
even continuous oscillations if the signal source impedance has a pronounced
inductive component. We shall thoroughly discuss this effect and its compensation
later, when we analyze the emitter follower (i.e., common collector) and the JFET
source follower amplifiers.
Now let us derive the emitter impedance Z_e in the case in which the base
impedance is inductive (Z_b = s L). Here we apply Eq. 3.2.18:

    Z_e = s L/(1 + β) = s L/[1 + β₀/(1 + s β₀ τ_T)]    (3.2.30)

    Z_e = s L (1 + s β₀ τ_T)/(1 + β₀ + s β₀ τ_T)    (3.2.31)

By the same continued fraction expansion as before:

    Z_e = s L − s L β₀/(1 + β₀ + s β₀ τ_T)    (3.2.32)
The negative part of the result can be inverted to obtain the admittance:
    Y_e' = −(1 + β₀ + s β₀ τ_T)/(s L β₀) = −τ_T/L − (1 + β₀)/(s L β₀)    (3.2.33)
This means we have two parallel impedances. The first one is a negative resistance:
    R_x = −L/τ_T    (3.2.34)

and the second one is a negative inductance:

    L' = −L β₀/(1 + β₀) = −α₀ L    (3.2.35)
As required by Eq. 3.2.32, we must add the inductance P in series, thus arriving at the
equivalent emitter impedance shown in the figure below:
Fig. 3.2.6: The inductive source is reflected into the emitter with negative components.
Table 3.2.1: Impedance transformations between base and emitter. An impedance in
the base appears at the emitter divided by (β + 1), and an impedance in the emitter
appears at the base multiplied by (β + 1). With β(s) from Eq. 3.2.11:

    element in the base     equivalent network seen from the emitter
    R                       R in parallel with [R/β₀ in series with L = τ_T R]
    L                       L in series with [−α₀ L in parallel with −L/τ_T]
    C                       C in parallel with [β₀ C in series with R = τ_T/C]

    element in the emitter  equivalent network seen from the base
    R                       R in series with [β₀ R in parallel with C = τ_T/R]
    L                       L in series with [β₀ L in parallel with R = L/τ_T]
    C                       C in parallel with [−α₀ C in series with R = −τ_T/C]
[Fig. 3.2.7: Transformation of the base R_b C_b network as seen from the emitter:
a) schematic; b) equivalent circuit; c) the middle branches reduce to a resistance
R_b/β₀ when R_b C_b = β₀ τ_T; d) final equivalent circuit.]
From the Table 3.2.1 we first transform the resistance R_b from base to emitter
and obtain what is shown on the left half of Fig. 3.2.7b. Then we transform the
capacitance C_b and obtain the right half of Fig. 3.2.7b. If we want the transformed
network to have the smallest possible influence in the emitter circuit, we can apply the
principle of the constant resistance network (L and C cancel each other when R C = L/R,
[Ref. 3.27]). To do so let us focus on both middle branches of the transformed network,
where we select such values of R_b and C_b that:

    R_b/β₀ = τ_T/C_b    (3.2.36)

which is equivalent to:

    R_b C_b = τ_T β₀    (3.2.37)
With such values of R_b and C_b the middle two branches obtain the form of a
resistor with the value R_b/β₀, as shown in Fig. 3.2.7c, which allows us to further
simplify the complete circuit to that in Fig. 3.2.7d.
To acquire a feeling for practical values, let us make a numerical example. Our
transistor has:

    β₀ = 80        f_T = 600 MHz        R_b = 47 Ω

    τ_T = 1/ω_T = 1/(2π f_T) = 1/(2π · 600·10⁶) = 265 ps    (3.2.38)

From Eq. 3.2.37 the required base capacitance is:

    C_b = τ_T β₀/R_b = 265·10⁻¹² · 80/47 = 451 pF    (3.2.39)

and the equivalent emitter resistance (Fig. 3.2.7d) is:

    R_b/(1 + β₀) = 47/81 = 0.58 Ω    (3.2.40)

The inductance of the transformed R_b branch is:

    L_b = τ_T R_b = 265·10⁻¹² · 47 = 12.5 nH    (3.2.41)

and the series resistance of the transformed C_b branch confirms Eq. 3.2.36:

    τ_T/C_b = 265·10⁻¹²/451·10⁻¹² = 0.59 Ω ≈ R_b/β₀    (3.2.42)
We shall consider these results in the design of the common base amplifier and
of the cascode amplifier.
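The numbers of this example are worth recomputing; the short sketch below reproduces Eq. 3.2.38–3.2.40 from the stated transistor data (β₀ = 80, f_T = 600 MHz, R_b = 47 Ω). The small differences from the rounded values in the text come from carrying τ_T at full precision rather than as 265 ps.

```python
# Worked example of Eq. 3.2.38-3.2.40: base R_b C_b network values for
# the transistor with beta0 = 80, f_T = 600 MHz, R_b = 47 ohm.
from math import pi

beta0 = 80.0
f_T = 600e6
R_b = 47.0

tau_T = 1.0 / (2.0 * pi * f_T)       # ~265 ps, Eq. 3.2.38
C_b = tau_T * beta0 / R_b            # ~451 pF, Eq. 3.2.39
R_b_emitter = R_b / (1.0 + beta0)    # ~0.58 ohm seen from the emitter, Eq. 3.2.40

print(round(tau_T * 1e12), "ps")
print(round(C_b * 1e12, 1), "pF")
print(round(R_b_emitter, 2), "ohm")
```

Note how small the emitter-side resistance is: the (β₀ + 1) division makes the 47 Ω base spread resistance almost invisible from the emitter at low frequencies.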
Example 2:

By using the same principles as we have used above, we shall take another
example, which is also very important for wideband amplifier design. We shall
transform a parallel combination R_e C_e, as shown in Fig. 3.2.8a, from emitter to base.
With the data from Table 3.2.1, we can draw the transformed base network separately
for R_e and C_e and then connect them in parallel. This is shown in Fig. 3.2.8b. Now we
focus only on the middle part of the circuit, which is drawn in Fig. 3.2.8c. If we select
such values of R_e and C_e that:

    R_e C_e = τ_T    (3.2.44)

and if we consider that α₀ ≈ 1, then the admittance Y of the middle part of the circuit
becomes zero, because in this case:

    C_e = τ_T/R_e    (3.2.45)

and:

    α₀ C_e ≈ C_e = τ_T/R_e    (3.2.46)

and the parallel connection of a positive and an equal negative admittance gives zero
admittance:

    Y = s τ_T/R_e − s α₀ C_e = 0    when R_e C_e = τ_T    (3.2.47)
Fig. 3.2.8: Transformation of the emitter R_e C_e network as seen from the base:
a) schematic; b) equivalent circuit; c) if R_e C_e = τ_T and α₀ ≈ 1, the sum of
admittances of the components in the middle is zero; d) final equivalent circuit.
The transformation in Fig. 3.2.8 allows us to trade the gain of a common emitter
circuit for a reduced input capacitance. Instead of C_π (large), as it would be if the
emitter were grounded, the input capacitance equals the capacitance C_e (small) which we
have inserted in the emitter circuit. Of course, we still have to add the base to collector
capacitance C_μ or the Miller capacitance C_M. As we will see in Sec. 3.4, where we shall
discuss the cascode amplifier, the gain is reduced in proportion to R_L/R_e. Since in a
wideband amplifier stage we almost never exceed a voltage gain of ten, we can
always apply the above transformation.
For a numerical example let us use the same transistor as before (β₀ = 80,
f_T = 600 MHz). According to Eq. 3.2.38 the corresponding τ_T is 265 ps. Let us say that
on the basis of the desired current gain of the common emitter stage we select an
emitter resistor R_e = 100 Ω. What is the value of the parallel emitter capacitance C_e
which would give the input impedance according to Fig. 3.2.8d?

Since R_e C_e = τ_T = 265 ps, it follows that:

    C_e = τ_T/R_e = 265·10⁻¹²/100 = 2.65 pF only!    (3.2.48)

With an emitter current of 10 mA the intrinsic emitter resistance is:

    r_e = 26 mV/10 mA = 2.6 Ω    (3.2.49)

Without the R_e C_e network in the emitter, the base to emitter capacitance C_π would
define the bandwidth and its estimated value would be (Eq. 3.1.4):

    C_π = 1/(2π r_e f_T) = 1/(2π · 2.6 · 600·10⁶) = 102 pF    (3.2.50)
Here, too, we have neglected the base resistance r_b; it must be added to the value above
to obtain a more accurate figure.
In Fig. 3.2.9a the transistor stage with R_e C_e is shown again and in Fig. 3.2.9b is
its small signal equivalent input circuit.

In wideband amplifiers we usually make the emitter network with C_e < 20 pF.
In order to match R_e C_e = τ_T the capacitor C_e is often made adjustable, because τ_T in
commercially available transistors has rather large tolerances.
[Fig. 3.2.9: The common emitter stage with the R_e C_e network (R_e C_e = τ_T): a) schematic;
b) small signal equivalent input circuit, r_b in series with (β₀ + 1) R_e in parallel with C_e.]
With an appropriate R_L in the collector (not shown in Fig. 3.2.9) we might now
calculate the (decreased) voltage amplification A_v owed to the R_e C_e network in the
emitter circuit of the common emitter stage and consider the decreased value of the
Miller capacitance C_M. Since we shall not use exactly such an amplifier configuration we
leave this as an exercise to the reader.

But for the application in the cascode amplifier, which we are going to discuss
in Sec. 3.4, it is important to know the transconductance i_o/v_i of the amplifier with the
R_e C_e network. The corresponding circuit is drawn again in Fig. 3.2.10a and Fig. 3.2.10b
shows the equivalent small signal circuit.
Fig. 3.2.10: Common collector amplifier: a) schematic; b) equivalent small signal circuit.
If we neglect the resistance r_b and the capacitance C_μ the following relation is
valid for the remaining circuit:

    v_i = i_i (z_π + Z_e) + i_o Z_e    (3.2.52)

where:

    i_o = g_m v_π    and    v_π = i_i z_π

therefore:

    i_o = g_m i_i z_π    (3.2.53)
    z_π = r_π/(1 + s C_π r_π)    and    Z_e = R_e/(1 + s C_e R_e)    (3.2.54)

Inserting i_o from Eq. 3.2.53 into Eq. 3.2.52:

    v_i = i_i (z_π + Z_e + g_m z_π Z_e)    (3.2.55)

    i_i = v_i/(z_π + Z_e + g_m z_π Z_e)    (3.2.56)
The output current can be obtained by inserting Eq. 3.2.56 back into Eq. 3.2.53:
    i_o = g_m z_π v_i/(z_π + Z_e + g_m z_π Z_e)    (3.2.57)

so the transconductance is:

    i_o/v_i = g_m z_π/(z_π + Z_e + g_m z_π Z_e)    (3.2.58)

Dividing both the numerator and the denominator by g_m z_π:

    i_o/v_i = 1/[Z_e (1 + 1/(g_m z_π)) + 1/g_m]    (3.2.59)
Now we insert the expressions for z_π and Z_e from Eq. 3.2.54 and replace g_m by 1/r_e:

    i_o/v_i = 1/{r_e + [R_e/(1 + s C_e R_e)] · [1 + (r_e/r_π)(1 + s C_π r_π)]}    (3.2.60)

    i_o/v_i = (1 + s C_e R_e)/{r_e (1 + s C_e R_e) + R_e [1 + s C_π r_e + r_e/r_π]}    (3.2.61)
Because r_e/R_e ≪ 1 and r_e/r_π ≪ 1 we can neglect those terms, so:

    i_o/v_i = (1 + s C_e R_e)/{R_e [1 + s (C_e + C_π) r_e]}    (3.2.62)

The numerator and denominator cancel (since r_e ≪ R_e) if we choose:

    C_e R_e = C_π r_e = τ_T    (3.2.63)
and the transconductance then becomes frequency independent:

    i_o/v_i ≈ 1/R_e    (3.2.64)
Here we must not forget that at the beginning of our analysis we have neglected
the resistance r_b, which, together with the transformed input capacitance C_e and the
collector to base capacitance C_μ, makes a pole at:

    s₁ = −1/[(C_e + C_μ) r_b]    (3.2.65)

The magnitude of s₁ is equal to the upper half power frequency: |s₁| = ω_h. This
pole makes the stage frequency dependent in spite of Eq. 3.2.64. We have also
neglected the input resistance (β₀ + 1) R_e, but, since it is much larger than r_b, we shall not
consider its influence (with it, the bandwidth would increase slightly). By introducing
the pole s₁ back into Eq. 3.2.64, we obtain a more accurate expression for the
transadmittance:
    i_o/v_i = (1/R_e) · [1/((C_e + C_μ) r_b)]/[s + 1/((C_e + C_μ) r_b)]    (3.2.66)
Fig. 3.3.1: Common base amplifier: a) schematic; b) equivalent small signal model.
The main characteristics of the common base stage are a very low input
impedance, a very high output impedance, a current amplification factor α₀ ≈ 1, and,
with the correct value of the loading resistor R_L, the possibility of achieving higher
bandwidths. The last property is owed to a near elimination of the Miller effect, since
C_μ is now grounded and does not affect the input A_v times. Thus C_μ is effectively in
parallel with the loading resistor R_L and because we can make the time constant
R_L C_μ relatively small the bandwidth of the stage may be correspondingly large.

Another very useful property of the common base stage is that the collector to
base breakdown voltage V_cbo is highest when the base is connected to ground, and the
higher reverse voltage reduces C_μ further (Eq. 3.1.2). Owing to all the listed properties
the common base stage is used almost exclusively for wideband amplifier stages where
large output signals are expected.
Following the current directions in Fig. 3.3.1, the input emitter current is:

    i_e = v_π/z_π + g_m v_π    (3.3.1)

where:

    z_π = r_π/(1 + s C_π r_π)    (3.3.2)

so that:

    i_e = v_π (1/r_π + s C_π + g_m)    (3.3.3)

The collector current is:

    i_c = g_m v_π    (3.3.4)

and the emitter to collector current gain is:

    α(s) = i_c/i_e = g_m/(g_m + 1/r_π + s C_π) ≈ α₀/(1 + s C_π r_e)    (3.3.5)

since g_m ≈ 1/r_e and α₀ = β₀/(β₀ + 1) ≈ 1. This equation has its pole at −1/(C_π r_e),
which lies extremely high in frequency, because r_e is normally very low. Since the
output pole −1/(R_L C_μ), which we shall consider next, becomes prevalent, we can neglect
s C_π r_e and assume that i_c ≈ i_e. The output voltage is:

    v_o = i_c Z_L = i_c R_L/(1 + s C_μ R_L)    (3.3.6)
With the simplifications considered above we can write the expression for the
transimpedance, which is:

    v_o/i_e = R_L/(1 + s C_μ R_L)    (3.3.7)
The input (emitter) admittance must also account for the base spread resistance r_b,
transformed to the emitter according to Eq. 3.2.20:

    Y_e = 1/r_b + 1/(r_b/β₀ + s τ_T r_b)    (3.3.8)

Within the frequency range of interest the value r_b/β₀ in the second fraction is
small and can be neglected. The simplified input admittance is thus:

    Y_e = 1/r_b + 1/(s τ_T r_b)    (3.3.9)
Fig. 3.3.2: Common base amplifier input impedance: a) r_b transformed to Z_e;
b) within the frequency range of interest, r_b/β₀ can be neglected.
The equivalent input impedance is thus a resistance in parallel with an inductance:

    R_e = r_b    (3.3.10)

    L_e = r_b τ_T    (3.3.11)
Normally, if the amplifier is built with discrete components, there is always some lead
inductance L_s which must be added in series in order to obtain the total impedance.

In the next section, where we shall discuss the cascode amplifier, we shall find
that the inductance L_e, together with the capacitance C_μ of the common emitter current
driving stage, forms a parallel resonant circuit which may cause ringing in the
amplification of steep signals. In most cases the resistance R_e is too large to damp the
ringing effectively by itself, so additional circuitry will be required.

Eq. 3.3.10 and Eq. 3.3.11, respectively, disclose the fact that the annoying
inductance L_e and the resistance R_e are directly proportional to the base spread
resistance r_b. When using this type of amplifier for the output stages, where the
amplitudes are large (e.g., in oscilloscopes), we must use more powerful transistors,
mostly in the TO-5 case type. Because the internal transistor connections are then relatively
long and the total active area is large, the corresponding r_b is large as well. In order to
decrease R_e and L_e we must select transistors which have a low r_b. To decrease the base
spread resistance as much as possible and also to decrease the transition time (the time
needed by the current carriers to cross the base), RCA developed
(already in the late 1960s) the so called overlay transistor. A typical overlay transistor
is the 2N3866. Such transistors are essentially integrated circuits, composed of many
identical small transistors connected in parallel.
[Fig. 3.4.1: Cascode amplifier circuit schematic: the common emitter transistor Q1, with
R_e1 and C_e1 in the emitter, drives the common base transistor Q2, loaded by R_L.]
Fig. 3.4.2: Equivalent small signal model of a cascode amplifier. The components
belonging to the common emitter circuit bear the index 1 and those of the common
base circuit bear the index 2.
have Eq. 3.3.7. We only need to multiply these two equations to get the voltage gain of
our cascode amplifier:

    A_v = (i_o1/v_i)(v_o/i_o1) = (1/R_e1) · 1/(1 + s C_e1 r_b1) · R_L/(1 + s C_μ2 R_L)    (3.4.1)

Here we have approximated α₀2 ≈ 1, and therefore i_o2 ≈ i_o1. The first fraction,
multiplied by R_L from the third fraction, is the DC voltage amplification and the
remainder represents the frequency dependence:

    A_v = (R_L/R_e1) · 1/[(1 + s C_e1 r_b1)(1 + s C_μ2 R_L)]    (3.4.2)
Obviously, the frequency dependence is a second-order function. There are two poles:
the pole at the input is s₁ = −1/(C_e1 r_b1), whilst the pole s₂ = −1/(C_μ2 R_L) is
on the output side. As we shall see later, it is possible to apply the peaking technique on
both sides.
In an ideal case the common base stage input (emitter) impedance is very low.
Because of this low load the first stage voltage gain is A_v1 ≈ 1, so C_μ1 would not be
amplified by it. And if we could neglect r_b2 the capacitance C_μ2 would appear in
parallel with the loading resistor R_L, and therefore it would not be multiplied by the
second stage's voltage gain A_v2 either. Both C_μ1 and C_μ2 are relatively small, so it is
obvious that the cascode amplifier has, potentially, a much greater bandwidth in comparison
with a simple common emitter amplifier (for the same total voltage gain). The price we
pay for this improvement is the additional transistor Q2.

Of course, in practice things are not so simple, and in addition we should not
neglect the inevitable stray capacitances. Those should be added to C_μ1 and C_μ2. Also,
R_s + r_b1 with C_μ1 will affect the behavior of Q1, and r_b2 with C_μ2 will affect the
behavior of Q2, as we shall see in the following analysis.
3.4.2 Damping of the Q2 Emitter

Owing to the base spread resistance r_b2 the Q2 input (emitter) has an inductive
component with the inductance L_e2 = τ_T2 r_b2 in parallel with r_b2, as already shown in
Table 3.2.1 and also in Fig. 3.3.2. The equivalent input impedance of the transistor Q2
was derived in Eq. 3.3.9 to Eq. 3.3.11.

As shown in the simplified circuit in Fig. 3.4.3, the inductance L_e2 and the
collector to base capacitance C_μ1 of Q1 form a series resonant circuit, damped by r_b2 in
parallel with L_e2 (and a series emitter resistance r_b2/β₀2, which is very small, so it was
neglected). The other end of C_μ1 is connected to the base of Q1, where we must
consider two effects:

    at very high frequencies the input signal goes directly through r_b1 and C_μ1;

    at somewhat lower frequencies, the signal is inverted and amplified by Q1 and
    the internal base junction can then be treated as a virtual ground.
Fig. 3.4.3: Parasitic resonance damping of the cascode amplifier. Two current paths must be considered: at the highest frequencies, for i_b1, C_μ1 represents a non-inverting cross-talk path; at lower frequencies, for i_c1, C_μ1 provides a negative feedback loop, thus it can be viewed as if being connected to a virtual ground (the Q1 base junction bJ1). The parasitic resonance, formed by C_μ1 and L_e2, is only partially damped by r_b2; the required additional damping is provided by inserting R_d and C_d between the Q1 collector and the Q2 emitter.
So let us put:

   R_d = r_b2   (3.4.3)

   C_d = L_e2/r_b2² = τ_T2/r_b2 = 1/(2π f_T2 r_b2)   (3.4.4)

To get a feeling for actual values, let us have two equal transistors with parameters such as in the examples in Sec. 3.2.4 (f_T = 600 MHz, r_b2 = 47 Ω):

   R_d = r_b2 = 47 Ω   (3.4.5)

   C_d = 1/(2π · 600·10⁶ · 47) ≈ 5.6 pF   (3.4.6)
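The damping network values above can be checked numerically; the following is a minimal sketch (the function name is ours, not the book's):

```python
import math

def damping_network(f_T2, r_b2):
    """Damping network for the Q2 emitter resonance (Eq. 3.4.3-3.4.4):
    R_d = r_b2 and C_d = tau_T2 / r_b2 = 1 / (2*pi*f_T2*r_b2)."""
    R_d = r_b2
    C_d = 1.0 / (2.0 * math.pi * f_T2 * r_b2)
    return R_d, C_d

# book example: f_T = 600 MHz, r_b2 = 47 ohm
R_d, C_d = damping_network(600e6, 47.0)
```

The computed C_d comes out near 5.6 pF, matching Eq. 3.4.6.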
circuit of transistor Q2, and as we shall see later, it can be a good choice for providing the thermal stabilization of the cascode stage.
Since we have introduced Z_d into the collector circuit of Q1, we must now account for the Q1 Miller capacitance:

   C_M1 = C_μ1(1 + Z_e2/Z_e1)   (3.4.7)

where Z_e2 is the compensated emitter input impedance of Q2 (≈ r_b2) and Z_e1 is the impedance of the emitter circuit of Q1. With this consideration the gain, Eq. 3.4.2, becomes:

   A_v = (R_L/R_e1) · 1/{[1 + s(C_e1 + C_M1)r_b1][1 + sC_μ2R_L]}   (3.4.8)
(Fig. 3.4.4: The damped equivalent circuit: i_c1 splits into a high frequency path i_e2HF through C_μ1 and a low frequency path i_e2LF through R_d and C_d into the Q2 emitter; the compensated emitter impedance is Z'_e2 = r_b2 when R_d C_d = L_e2/r_b2, with the Q1 base junction bJ1 acting as a virtual ground.)
The collector to base capacitance of transistor Q1 allows very high frequency signals from the input to bypass this transistor and flow directly into the emitter of transistor Q2. Transistor Q1 amplifies, inverts, and delays the low frequency signals. In contrast, all of what comes through C_μ1 is non-delayed, non-amplified, and non-inverted, causing a pre-shoot [Ref. 3.1] in the step response, as shown in Fig. 3.4.5. The Q1 collector current, i_c1, is the sum of i_μ1 and v_π1 g_m1. Note that both the pre-shoot owed to i_μ1 and the overshoot of v_π1 g_m1 are reduced in v_c2 by the Q2 pole due to C_μ2.
Fig. 3.4.5: The step response v_c2 has a pre-shoot owed to the signal cross-talk through C_μ1 (arbitrary vertical units, corresponding to A_v = 2).
So far we have excluded C_μ2 from our analysis. When included, its effect on bandwidth is severe, owing to the Miller effect and r_b2. But it also affects the emitter input impedance of Q2, since C_M2 = C_μ2(1 + A_v) appears in parallel with r_b2 and is consequently transformed into the emitter in accordance with Table 3.2.1, in the same way as in Fig. 3.2.7. If A_v is relatively high, the pronounced resonance owed to C_μ2 can cause long ringing, even if the bandwidth is lower than the resonant frequency.
Instead of using increased damping in series with the emitter of Q2, which would reduce the bandwidth further, John Addis [Ref. 3.26] suggested an alternative approach. The Q2 base, instead of being connected directly to a low impedance bias voltage, should be connected to it through a resistor R_A of up to 100 Ω and grounded by a small capacitor C_A (comparable to C_μ2). In Fig. 3.4.6 the voltage gain, the phase, and the group delay as functions of frequency are shown for the two cases, R_A = 0 and R_A = 33 Ω, respectively. The change of the Q2 input impedance is revealed by the lower drive stage current i_c1 near the cut-off frequency.
Fig. 3.4.6: The compensation method of Q2 as suggested by John Addis. a) With R_A = 0, the frequency response has a notch at the resonance and a phase-reversed cross-talk, which makes the group delay τ_e positive. b) With R_A = 33 Ω and C_A = 3 pF the frequency response, the phase, and the group delay are smoothed and, although the bandwidth is reduced slightly, the group delay linearity is extended and the undesirable positive region is brought back to negative. The Q2 emitter impedance is increased, as can be deduced from the lower i_c1 peak.
(3.4.9)
"
"
"
"
=GM#
=GA
<b#
VA
(3.4.10)
#VA
Vs
" =GA VA
" =Gs Vs
(3.4.11)
(Fig. 3.4.7: three variants of the Q2 base biasing network: a) the base bypassed directly; b) the base connected through R_A and grounded by C_A; c) as b), with the bias source impedance R_s, C_s included; the drawings show Q2, r_b2, C_μ2, C_M2, R_L, V_b2, and V_cc.)
5B X
; Me
(3.4.12)
When we apply the bias and supply voltage to a transistor, the collector current I_c will cause an increase in the temperature T of the transistor, owing to the power dissipated in it:

   P_D = I_c V_ce   (3.4.13)

where V_ce is the collector to emitter voltage. If the ambient temperature is T_A and the total thermal resistance from junction to ambient is Θ_JA [K/W] (kelvin per watt), the junction temperature T_J will be:

   T_J = T_A + Θ_JA P_D   (3.4.14)
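As a quick numeric sketch of Eq. 3.4.13 and 3.4.14 (the thermal resistance, bias current, and ambient temperature below are hypothetical illustration values, not from the book):

```python
def junction_temperature(T_A, theta_JA, I_c, V_ce):
    """Eq. 3.4.13-3.4.14: P_D = I_c*V_ce, then T_J = T_A + theta_JA*P_D."""
    P_D = I_c * V_ce                 # dissipated power [W]
    return T_A + theta_JA * P_D      # junction temperature

# hypothetical: 50 mA at 7.2 V, theta_JA = 200 K/W, 25 deg C ambient
T_J = junction_temperature(25.0, 200.0, 0.05, 7.2)
```

With these values the junction runs 72 K above ambient, which shows how quickly a small-signal transistor heats up at cascode bias currents.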
Fig. 3.4.8: Thermal distortion can cause long term drift after the transient
(exaggerated, but not too much !). In addition to this thermal time constant, there can
also be a much slower one, owed to the change in the transistor case temperature.
If we go back to Eq. 3.2.61 for a moment, we note that the gain also depends on the ratio r_e/R_e (in the denominator), which we have assumed to be much smaller than 1 and thus neglected. But if R_e is small, the change in temperature and emitter current can alter the gain by up to a few percent in the worst cases.
Before we look for a remedy to cancel, or at least substantially reduce, the thermal distortion, let us take a look at Fig. 3.4.9, which shows a simple common emitter stage and the way in which the power dissipation is shared between the load and the transistor as a function of the collector current. As usual, we use capital letters for the applied DC voltages, loading resistor, etc., and small letters for the instantaneous signal voltages and power dissipation. The instantaneous transistor power dissipation is:

   p_Q1 = v_ce i_c = v_ce (V_cc − v_ce)/R_L = V_cc v_ce/R_L − v_ce²/R_L   (3.4.15)
Since v_ce cannot exceed V_cc if the collector load is purely resistive, the right vertical axis ends at V_cc. The left vertical axis is normalized to the maximum load power, which is simply p_Lmax = V_cc²/R_L (corresponding to v_ce = 0 and thus p_Q1 = 0). The transistor's power dissipation, however, follows an inverted parabolic function with a maximum at:

   p_Q1max = (V_cc/2)² · 1/R_L = V_cc²/(4R_L)   (3.4.16)

where v_ce = V_cc/2. This point is the optimum bias for a transistor stage. If excited by small signals, the transistor power dissipation moves either left or right, close to the top of the parabola, and thus it does not change very much. This means that the transistor's temperature does not change very much either.
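The dissipation parabola of Eq. 3.4.15 and its maximum at v_ce = V_cc/2 (Eq. 3.4.16) can be verified with a short numeric sweep; the supply and load values below are hypothetical:

```python
# Instantaneous transistor dissipation p_Q1(v_ce) = v_ce*(Vcc - v_ce)/RL (Eq. 3.4.15)
Vcc, RL = 10.0, 100.0      # hypothetical supply voltage and collector load

p_Q1 = lambda v_ce: v_ce * (Vcc - v_ce) / RL

# sweep v_ce from 0 to Vcc and locate the dissipation maximum numerically
p_max, v_at = max((p_Q1(i * Vcc / 1000), i * Vcc / 1000) for i in range(1001))
```

The sweep confirms that the maximum p_Q1 = V_cc²/(4R_L) occurs exactly at v_ce = V_cc/2, the optimum bias point of Fig. 3.4.9.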
Fig. 3.4.9: The optimum bias point is when the voltage across the load is
equal to the voltage across the transistor. This is optimal both in the sense of
thermal stability, as well as in the available signal range sense.
If different design conditions force us to move the bias point far from the top of the parabola, the bias with V_ce > V_cc/2 is preferred to the range V_ce < V_cc/2, because the latter situation is unstable. However, in wideband amplifiers we can hardly avoid it, because we want to have a low R_L, a high I_c, and a high V_cb (to reduce C_μ), and all three conditions are required for high bandwidth.
The typical temperature coefficient of a base-emitter pn junction voltage (≈ 0.6 V) is approximately −2 mV/K for silicon transistors, so we can explain the instability in the following way:
When the circuit is powered up, the transistor conducts a certain collector current, which heats the transistor, increasing the temperature of the base-emitter pn junction. If the base is biased from a voltage source (low impedance, which in wideband amplifiers is usually the case), the temperature increase will, owing to the negative temperature coefficient, decrease the base-emitter voltage. In turn, the base current increases and, consequently, both the emitter and the collector current increase, which further increases the dissipation and the junction temperature. The load voltage drop would also increase with current and therefore reduce the collector voltage, thus lowering the transistor power dissipation. But with a low R_L, the change in the load voltage drop will be small, so the increase in transistor power dissipation will be reduced only slightly.
The effect described is cumulative; it may even lead to a thermal runaway and the consequent destruction of the transistor if the top of the parabola exceeds the maximum permissible power dissipation of the transistor (which is often the case, since we want low R_L and high V_cc and I_e, as stated above). In a similar way, on the basis of the −2 mV/K temperature dependence of V_be, the reader can understand why the bias point for V_ce > V_cc/2 is thermally stable.
According to Eq. 3.4.16, to have the transistor thermally stable means having resistances R_L, or R_L + R_e, such that at the bias point the voltage drop across them is equal to V_cc/2. In general, this principle is successfully applied in differential amplifiers: when one transistor is excited so that its bias point is pushed to one side of the parabola, the bias point of the other transistor is moved to exactly the same dissipation on the opposite side of the parabola. As a result the temperature becomes lower but equal in both transistors. Thus in the differential amplifier both transistors can always have the same temperature, independent of the excitation (provided that we remain within the linear range of excitation), and there is (ideally) no reason for a thermal drift in the differential output signal (in practice, it is difficult to make two transistors with similar parameter values, let alone equal values, even within an integrated circuit).
In our cascode circuit of Fig. 3.4.1 the transistor Q1 already has an emitter resistor R_e1, as dictated by the required current gain, and we do not want to change it. However, we can add a resistor, which we label R_θ, in the collector circuit of Q1 to make V_ce1 = I_c1(R_e1 + R_θ) = V_cc1/2, where V_cc1 is the voltage at the emitter of transistor Q2. Suppose now that the emitter current is I_e1 ≈ I_c1 = 50 mA and the Q2 base voltage is V_bb = 15 V. Then the collector supply voltage of transistor Q1 is:

   V_cc1 = V_bb − V_be2 = 15 − 0.6 = 14.4 V   (3.4.17)

where V_be2 is the base-emitter voltage (about 0.6 V for a silicon transistor).
"#% H
Mc"
!!&
(3.4.18)
r c1
Cd
Rd
R - Rd
Q2
C 1
base of Q 1
(virtual ground)
The question is how does one calculate the proper value of C_θ? The obvious way would be to calculate the thermal capacity of the transistor's die and case mass (adding an eventual heat sink) and all the thermal resistances (junction to case, case to heat sink, heat sink to air), as is usually done for high power output stages.
Bruce Hofer [Ref. 3.8] suggests the following, more elegant procedure, based on Fig. 3.4.10. The two larger time constants in this figure must be equal:

   (R_θ − R_d) C_θ = r_c1 C_M1   (3.4.19)
Here <c" is the dynamic collector resistance of transistor U" , derived from Fig. 3.4.11 as
?Zce ?Mc . In this figure ZA is the Early voltage:
I
Ic
Ib5
Ic
Ib4
Ib3
Ib2
Ib1
V
0
VA
Vce
Vce
The meaning of the Early voltage can be derived from Fig. 3.4.11, where several curves of the collector current I_c vs. the collector to emitter voltage V_ce are drawn, with the base current I_b as the parameter. With increasing collector voltage the collector current increases even if the base current is kept constant. This is because the collector to base depleted area widens at the expense of the (active) base width as the collector voltage increases. This in turn causes the diffusion gradient of the current carriers in the base to increase, hence the increased collector current.
By extending the lines of the collector current characteristics back, as shown in Fig. 3.4.11, all the lines intersect the abscissa at the same virtual voltage point −V_A (negative for npn transistors), called the Early voltage (after J. M. Early, [Ref. 3.11]); from the similarity of triangles we can derive the collector's dynamic resistance:

   r_c1 = ∂V_ce/∂I_c = (V_ce + V_A)/I_c   (3.4.20)
Since the voltage gain of the common emitter stage is low, C_M1 will be only slightly larger than C_μ1. If we now suppose that transistor Q1 has r_c1 = 0.5·10⁶ Ω and C_μ1 = 3 pF, the value of C_θ should be:

   C_θ = r_c1 C_M1/(R_θ − R_d) ≈ 0.5·10⁶ · 3·10⁻¹²/(124 − 47) ≈ 19.5 nF   (3.4.21)
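Hofer's matching condition (Eq. 3.4.19) rearranged for C_θ can be checked with the example values above; a minimal sketch:

```python
# Thermal compensation capacitor from (R_theta - R_d)*C_theta = r_c1*C_M1
r_c1 = 0.5e6        # dynamic collector resistance [ohm]
C_M1 = 3e-12        # Miller capacitance, ~C_mu1 here [F]
R_theta, R_d = 124.0, 47.0

C_theta = r_c1 * C_M1 / (R_theta - R_d)   # Eq. 3.4.19 rearranged
```

This reproduces the ≈ 19.5 nF of Eq. 3.4.21.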
(figure: the complete cascode stage: source i_i1 into Q1 with the emitter network R_e1, C_e1 and R_s; the collector damping and thermal network C_d, R_d, R_θ − R_d; Q2 with the base network R_A, C_A at V_b2; output i_o2 into R_L, C_L at V_cc.)
The source current i_s divides between R_s and the amplifier input, giving in turn i_b1, i_c1, and i_c2, where:

   i_b1 = i_s R_s/(R_s + Z_i)   (3.5.1)

and

   i_c1 = β i_b1, with β = β₀/(1 + sβ₀τ_T1)   (3.5.2)

with Z_i being the input impedance looking into the base of transistor Q1 and R_s the source resistance. At higher frequencies, when the input capacitance of transistor Q1 prevails (see Eq. 3.2.12 and [Ref. 3.1]), we have:

   i_c1 ≈ i_b1/(sτ_T1), with τ_T1 = 1/(2πf_T1)   (3.5.3)
At the output, the load R_L and the capacitance C_o give:

   v_o = i_c2 R_L/(1 + sR_LC_o) = i_c2 R_L/(1 + sτ_L)   (3.5.4)

and the input impedance is the emitter impedance multiplied by the current gain:

   Z_i = (β + 1)Z_e1 ≈ Z_e1 (1 + sτ_T1)/(sτ_T1)   (3.5.5)

with the requirement, which we shall need later, that:

   τ_R < τ_L   (3.5.6)
^e"
Vcc
log | Z i |
RL
i c2
Vb2
| Zi |
(0 +1) R e1
1
| sC i |
Q1
b J1
Rc
CL
i c1
Zi
i b1
Zi
(3.5.7)
Ci
Cc
c)
is
R e1
Rs
Ce1
C
log
L
a)
b)
T1
By introducing Eq. 3.5.7 back into Eq. 3.5.5, the input impedance can be expressed as:

   Z_i = R_e1 (1 + sτ_R)/[sτ_T1 (1 + sτ_L)]   (3.5.8)

Now we put Eq. 3.5.2 and Eq. 3.5.8 into Eq. 3.5.1:

   v_o/i_s = R_sR_L/[s²R_sτ_T1τ_L + s(R_sτ_T1 + R_e1τ_R) + R_e1]   (3.5.9)
Next we put the denominator into the canonical form s² + as + b = 0 and equate it to zero:

   s² + s (R_sτ_T1 + R_e1τ_R)/(R_sτ_T1τ_L) + R_e1/(R_sτ_T1τ_L) = 0   (3.5.10)

where:

   a = (R_sτ_T1 + R_e1τ_R)/(R_sτ_T1τ_L)  and  b = R_e1/(R_sτ_T1τ_L)   (3.5.11)
The roots are:

   s_{1,2} = −a/2 ± √(a²/4 − b)   (3.5.12)

An efficient peaking must have complex poles, so the expression under the square root must be negative, therefore b > a²/4. We can then extract the negative sign as the imaginary unit and write Eq. 3.5.12 in the form:

   s_{1,2} = −a/2 ± j√(b − a²/4)   (3.5.13)
From Eq. 3.5.13 we can calculate the tangent of the pole angle θ:

   tan θ = ℑ{s₁}/ℜ{s₁} = −√(b − a²/4)/(a/2) = −√(4b/a² − 1)   (3.5.14)

It follows that:

   1 + tan²θ = 4b/a²   (3.5.15)
Now we insert the expressions from Eq. 3.5.11 for a and b and obtain:

   1 + tan²θ = 4R_e1R_sτ_T1τ_L/(R_sτ_T1 + R_e1τ_R)²   (3.5.16)

from which:

   (R_sτ_T1 + R_e1τ_R)² = 4R_e1R_sτ_T1τ_L/(1 + tan²θ)   (3.5.17)

   τ_R = [√(4R_e1R_sτ_T1τ_L/(1 + tan²θ)) − R_sτ_T1]/R_e1   (3.5.18)

The admittance of the emitter peaking network (R_e1 in parallel with C_e1 and with the series connection of R and C) is:

   Y_e1 = 1/R_e1 + sC_e1 + 1/(R + 1/(sC))   (3.5.19)
The emitter impedance Z_e1 is the inverse of Y_e1, and it must be equal to Eq. 3.5.7:

   Z_e1 = 1/Y_e1 = R_e1(1 + sRC)/[s²RC C_e1R_e1 + s(RC + CR_e1 + C_e1R_e1) + 1]   (3.5.20)

Comparing this with Eq. 3.5.7, whose denominator is (1 + sτ_T1)(1 + sτ_L) = s²τ_T1τ_L + s(τ_T1 + τ_L) + 1, the coefficients must satisfy:

   RC C_e1R_e1 = τ_T1τ_L   (3.5.21)

and:

   RC + CR_e1 + C_e1R_e1 = τ_T1 + τ_L   (3.5.22)

The value of R_e1 is constrained by the required DC voltage amplification R_L/R_e1. Thus we need the expressions for C, C_e1, and R. By using Eq. 3.5.18, 3.5.21, and 3.5.22 we obtain:

   C_e1 = τ_T1τ_L/(R_e1τ_R)   (3.5.23)

and:

   C = [τ_T1 + τ_L − τ_R − τ_T1τ_L/τ_R]/R_e1   (3.5.24)
where τ_R should be calculated by Eq. 3.5.18. Once the value of C is known, we can easily calculate the value of the resistor R = τ_R/C. Of course, τ_R is determined by the angle θ of the poles selected for the specified type of response.
Fig. 3.5.2a and 3.5.2b show the normalized pole loci in the complex plane. As seen already in the examples in Part 1 and Part 2, to achieve the maximally flat envelope delay response (MFED), a single stage 2nd-order function must have the pole angle θ = 150°. The original circuit has two real poles, s_T1 and s_L, but when the emitter peaking zero s_R is brought close to s_L, the poles form a complex conjugate pair. The frequency response is altered as shown in Fig. 3.5.2c and the bandwidth is extended. The emitter current increase i_e1(s)/I_e1, owing to the introduced RC network, has two break points: the lower is owed to R_e1 with C + C_e1 and the upper is owed to R with C. If the break point at s_R is brought exactly over s_L, they cancel each other, and the final response is shaped by the break point s_T1 of the transistor and the second break point of the emitter peaking network, s_C. The peaking can thus be adjusted by R and C.
Let us consider an example with these data: f_T1 = 2000 MHz, R_s = 60 Ω, R_e = 20 Ω, C_o = 9 pF, R_L = 390 Ω. We want to make the amplifier with such an emitter peaking network as will suit the Bessel pole loci (MFED), where the pole angle is θ = 150°. First we calculate both time constants:

   τ_T1 = 1/(2πf_T1) = 1/(2π · 2000·10⁶) = 79.58 ps   (3.5.25)

and:

   τ_L = R_LC_o = 390 · 9·10⁻¹² = 3.51 ns   (3.5.26)
T1
Fig. 3.5.2: Emitter peaking: a) two real poles travel towards each other when the emitter
network zero goes from _ towards =L , forming eventually a complex conjugate pair; b)
poles for Eq. 3.5.14 for the 2nd -order Bessel (MFED) response. c) frequency response
asymptotes the badwidth is extended to =T" if =R =L .
Vs 7T" 7L
Vs 7T"
(3.5.27)
Ge"
"!$& pF
Ve" 7R
#! "$& "!*
7T" 7L 7R 7T"
Ve
7L
7R
-3.53-
(3.5.28)
P. Stari, E. Margan
$&" "!*
"$& "!*
"!"' pF
(3.5.29)
Finally we calculate the value of the resistor V , from the time constant 7R :
V
"$& "!*
7R
"$#* H
G
"!"' "!"#
(3.5.30)
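The complete MFED emitter peaking design of Eq. 3.5.25 to 3.5.30 can be reproduced numerically; a sketch using the book's example values (variable names are ours):

```python
import math

# book example: f_T1 = 2000 MHz, R_s = 60, R_e1 = 20, C_o = 9 pF, R_L = 390
f_T1, R_s, R_e1, C_o, R_L = 2000e6, 60.0, 20.0, 9e-12, 390.0
theta = math.radians(150.0)            # Bessel (MFED) pole angle

tau_T1 = 1.0 / (2.0 * math.pi * f_T1)  # Eq. 3.5.25, ~79.58 ps
tau_L = R_L * C_o                      # Eq. 3.5.26, ~3.51 ns

# Eq. 3.5.18 with 1 + tan^2(150 deg) = 4/3
q = 4.0 * R_e1 * R_s * tau_T1 * tau_L / (1.0 + math.tan(theta)**2)
tau_R = (math.sqrt(q) - R_s * tau_T1) / R_e1           # Eq. 3.5.27

C_e1 = tau_T1 * tau_L / (R_e1 * tau_R)                 # Eq. 3.5.28
C = (tau_T1 + tau_L - tau_R - tau_T1*tau_L/tau_R) / R_e1   # Eq. 3.5.29
R = tau_R / C                                          # Eq. 3.5.30
```

The results land at τ_R ≈ 1.35 ns, C_e1 ≈ 10.35 pF, C ≈ 101.6 pF, and R ≈ 13.3 Ω, matching the worked example to rounding.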
Ve" " = 7R
= 7R Ve" Ve"
#
= 7T" " = 7L
= 7T" 7L = 7T"
(3.5.31)
Ve" 7R
= 7 T" 7 L
Gi
7T" 7L
Ve" 7R
(3.5.32)
Our objective is to keep such an input impedance (at the base-emitter junction of Q1) at lower frequencies also. In other words, at lower frequencies the plot of the input impedance should correspond to the |1/(sC_i)| line in Fig. 3.5.1b. All other impedances that appear in the input circuit should be canceled by an appropriate compensating network. To find these impedances, we perform a continued fraction expansion synthesis of the input admittance Y_i, as derived from the right side of Eq. 3.5.8. Thus:

   Y_i = 1/Z_i = (s²τ_T1τ_L + sτ_T1)/(sτ_RR_e1 + R_e1)
       = s τ_T1τ_L/(R_e1τ_R) + [s τ_T1(τ_R − τ_L)/τ_R]/(sτ_RR_e1 + R_e1)   (3.5.33)
The first fraction we recognize to be the input admittance sC_i. The second fraction can be inverted and, by canceling out s, we obtain the impedance:

   Z_i' = (sτ_RR_e1 + R_e1)τ_R/[sτ_T1(τ_R − τ_L)]
        = R_e1τ_R²/[τ_T1(τ_R − τ_L)] + R_e1τ_R/[sτ_T1(τ_R − τ_L)] = R_c + 1/(sC_c)   (3.5.34)
This means a resistor R_c and a capacitor C_c connected in series, and this combination is in parallel with the input capacitance C_i. The values are negative because τ_R < τ_L, as was required in Eq. 3.5.6. On the basis of these results we can draw the equivalent input impedance circuit corresponding to Fig. 3.5.1c. The expression for the capacitance C_c is:

   C_c = τ_T1(τ_R − τ_L)/(R_e1τ_R)   (3.5.35)

From Eq. 3.5.33, as well as from our previous analysis, we can derive that R_cC_c = τ_R and obtain a simpler expression for R_c:

   R_c = τ_R/C_c   (3.5.36)
Let us now continue our example of the emitter peaking cascode amplifier, with the data R_e1 = 20 Ω, τ_T1 = 79.58 ps, τ_R = 1.35 ns, and τ_L = 3.51 ns, and calculate the values of C_i, C_c, and R_c. The input capacitance C_i, without C_M, is:

   C_i = τ_T1τ_L/(R_e1τ_R) = 79.58·10⁻¹² · 3.51·10⁻⁹/(20 · 1.35·10⁻⁹) = 10.35 pF   (3.5.37)

   |C_c| = τ_T1(τ_L − τ_R)/(R_e1τ_R) = 6.37 pF   (3.5.38)

   R_c = τ_R/C_c = 385 Ω   (3.5.39)
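The reflected input elements can be recomputed from the time constants; a sketch with the book's example values (we evaluate only C_i and C_c here):

```python
# Reflected input elements for the worked example (Eq. 3.5.35 and 3.5.37)
tau_T1, tau_L, tau_R, R_e1 = 79.58e-12, 3.51e-9, 1.35e-9, 20.0

C_i = tau_T1 * tau_L / (R_e1 * tau_R)             # Eq. 3.5.37: ~10.35 pF
C_c = tau_T1 * (tau_R - tau_L) / (R_e1 * tau_R)   # Eq. 3.5.35: negative, |C_c| ~ 6.37 pF
```

Note that C_c comes out negative, as the text explains: the reflected series R_c, C_c branch is a negative impedance that the external network must cancel.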
The next step is to compensate the series connected C_c and R_c. This can be done by connecting in parallel an equal combination with positive elements. The admittance of such a combination is zero and thus the impedance becomes infinite. The mathematical proof of this operation is:

   Y_i' = 1/(R_c + 1/(sC_c)) + 1/(−R_c − 1/(sC_c)) = 0  and:  Z_i' = 1/Y_i' → ∞   (3.5.40)
By doing so, only the input capacitance C_i + C_M and the input resistance (β₀ + 1)R_e1 remain effective at the (junction) input.
The impedance Z_i as given by Eq. 3.5.8 is effective between the base-emitter junction and ground. Unfortunately, no direct access to the junction is possible, because from there to the base terminal we have the base spread resistance r_b1. This means that r_b1 must be subtracted from R_c to get the proper value of the compensating resistor. Supposing that r_b1 = 25 Ω, the proper compensating resistor is simply:

   R_c' = R_c − r_b1 = 385 − 25 = 360 Ω   (3.5.41)
The complete input circuit is shown in Fig. 3.5.3; the input impedance components, which are reflected from the emitter to the base junction, are within the box.
Fig. 3.5.3: The impedances in the emitter are reflected into the base junction of Q1. The emitter peaking components R and C are reflected into the negative elements R_c and C_c, which must be compensated by adding externally an equal and positive R_c and C_c; for proper compensation r_b1 must be subtracted from R_c.
Fig. 3.6.1: a) The cascode amplifier with a T-coil interstage network. b) The T-coil loaded by the equivalent small signal, high frequency input impedance C_o = C_e + C_M in series with r_b.

   L_a + L_b = L   (3.6.1)
Since the input shunt resistance (β₀ + 1)R_e is usually much higher than r_b, we will neglect it also, thus arriving at the circuit in Fig. 3.6.2a. Fig. 3.6.2b shows the equivalent T-coil circuit, in which we have replaced the magnetic field coupling factor k with the negative mutual inductance −L_M, and the coil branches by their equivalent inductances L_a and L_b. Finally, in Fig. 3.6.2c we have replaced the branch impedances by the symbols A to E, to determine the three current loops I₁, I₂, I₃.
Fig. 3.6.2: a) The T-coil loaded by the simplified input impedance; b) the equivalent T-coil circuit in which k is substituted by −L_M; c) the equivalent branch impedances and the three current loops.
By comparing Fig. 2.4.1b,c in Sec. 2.4 with Fig. 3.6.2b,c, we see that they are almost equal, except that in branch D we have the additional series resistance r_b. Let us list all these impedances again, but now including r_b:

   A = 1/(sC_b),  B = sL_a,  C = sL_b,  D = −sL_M + r_b + 1/(sC_o),  E = R_L   (3.6.2)
(3.6.2)
The general analysis of the branches, Eq. 2.4.6 2.4.13, showed that the input
impedance of the T-coil network is equal to its loading impedance ^i E VL . As we
shall soon see, <b between the T-coil tap and Go spoils this nice property; we shall have
to compensate it. The analysis here is similar to that in Sec. 2.4, so we do not have to
repeat it. Here we give the final result, Eq. 2.4.14, for convenience:
BCA BDA BEA DCA ECA E # A E # B E # C !
(3.6.3)
By entering all substitutions from Eq. 3.6.2, performing all the required multiplications
and arranging the terms in the decreasing powers of =, we obtain:
Pa Pb
P PM
P <b
VL
"
P
V#
Pa Pb
L !
VL# P
Gb
Gb
Gb
Gb
= Go G b
Gb
(3.6.4)
= O" O# =" O$ !
-3.58-
(3.6.5)
The difference between Eq. 3.6.4, 3.6.5 and Eq. 2.4.15, 2.4.16 is in the middle term. Again, if we want to have an input impedance independent of the frequency, then each of the coefficients K₁, K₂, and K₃ must be zero [Ref. 3.5]:

   K₁ = (L_aL_b − LL_M)/C_b − R_L²L = 0

   K₂ = (L_a − L_b)/C_b + Lr_b/(R_LC_b) = 0

   K₃ = L/(C_oC_b) − R_L²/C_b = 0   (3.6.6)
So we have the three equations from which we can calculate the parameters L_a, L_b, and L_M. By considering Eq. 3.6.1 we obtain:

   L_a = (L/2)(1 − r_b/R_L) = (R_L²C_o/2)(1 − r_b/R_L)   (3.6.7)

   L_b = (L/2)(1 + r_b/R_L) = (R_L²C_o/2)(1 + r_b/R_L)   (3.6.8)

   L_M = (L/4)(1 − r_b²/R_L²) − R_L²C_b = (R_L²C_o/4)(1 − r_b²/R_L²) − R_L²C_b   (3.6.9)
Two interesting facts become evident from Eq. 3.6.7 and 3.6.8. First, L_a ≠ L_b, and this means that the coil tap is no longer at the coil's center, but is moved towards the coil's signal input node. Secondly, R_L must always be larger than r_b, otherwise L_a becomes negative. But we reach the limit of realizability long before that, since we know from Part 2 that L₁ = L_a − L_M (and also L₂ = L_b − L_M).
In Eq. 3.6.9 we have two unknowns, L_M and C_b; therefore we need a fourth equation to calculate them. Similarly as we did in Sec. 2.4, we shall use the transimpedance equation for this purpose. The procedure is well described from Eq. 2.4.20 to 2.4.24, and we write the last one again:

   v_o/i_1 = [1/(sC_o)] · (CA + EA + EB + EC)/(CA + CB + DA + DB + DC + EA + EB + EC)   (3.6.10)
If we insert the substitutions from Eq. 3.6.2, we obtain the following result:

   F(s) = v_o/i_1 = R_L/[s²R_L²C_oC_b + sC_o(R_L + r_b)/2 + 1]   (3.6.11)

In a similar way, for the transimpedance from the input to R_L we would obtain:

   v_R/i_i = R_L · [s²R_L²C_oC_b − sC_o(R_L − r_b)/2 + 1]/[s²R_L²C_oC_b + sC_o(R_L + r_b)/2 + 1]   (3.6.12)
Since we have the factor R_L − r_b in the numerator and a different factor R_L + r_b in the denominator, the two zeros are not symmetrically placed in relation to the two poles in the s-plane. Therefore Eq. 3.6.12 does not describe an all pass network, and the input impedance is not simply R_L as before. This represents the basic obstacle to using T-coils in a transistor distributed amplifier, because the T-coil load can not be replaced by another T-coil network (for comparison see [Ref. 2.18 and 2.19]).
Eq. 3.6.11 has two poles, which we calculate from the canonical form of the denominator:

   s² + s(R_L + r_b)/(2R_L²C_b) + 1/(R_L²C_oC_b) = 0   (3.6.13)

and both poles are:

   s_{1,2} = −(R_L + r_b)/(4R_L²C_b) ± √[(R_L + r_b)²/(16R_L⁴C_b²) − 1/(R_L²C_oC_b)]   (3.6.14)

   s_{1,2} = −[(1 + r_b/R_L)/(4R_LC_b)] · {1 ∓ j√[16C_b/(C_o(1 + r_b/R_L)²) − 1]}   (3.6.15)
(3.6.15)
An efficient inductive peaking must have complex poles. For Bessel poles, as
shown in Fig. 3.6.2, the pole angles )",# "&! and with this pole arrangement we
obtain the MFED response. If the poles are complex the tangent of the pole angle is the
ratio of the imaginary to the real component of Eq. 3.6.15:
tan )
e="
"' Gb
"
d ="
Go " <b VL #
(3.6.16)
Gb Go
" tan# )
<b #
"
"'
VL
(3.6.17)
Compared to the symmetrical T-coil, here we have the additional factor a" <b VL b# .
For Bessel poles ) "&! &1' and tan# ) "$, thus for a single stage case:
Gb
Go
<b #
"
"#
VL
(3.6.18)
If we replace Gb in Eq. 3.6.9 with Eq. 3.6.18, the mutual inductance is:
#
PM VL# Go
"
<#
"
<b
" b#
"
VL
"#
VL
%
(3.6.19)
With this we can calculate the coupling factor 5 between the coil P" and P# [Ref. 3.23]:
5
PM
PM
P" P#
Pa PM Pb PM
(3.6.20)
Now we have all the equations needed for the T-coil transistor interstage coupling.
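The complete set of T-coil relations (Eq. 3.6.7 to 3.6.9, 3.6.18, and 3.6.20) can be collected into one routine; a sketch with hypothetical element values (the function name and the chosen R_L, C_o are ours):

```python
import math

def t_coil_params(R_L, r_b, C_o):
    """T-coil branch inductances and coupling factor for the MFED case."""
    L = R_L**2 * C_o                    # total coil inductance, from K3 = 0
    rho = r_b / R_L
    L_a = 0.5 * L * (1.0 - rho)         # Eq. 3.6.7
    L_b = 0.5 * L * (1.0 + rho)         # Eq. 3.6.8
    C_b = C_o * (1.0 + rho)**2 / 12.0   # Eq. 3.6.18 (Bessel/MFED)
    L_M = 0.25 * L * (1.0 - rho**2) - R_L**2 * C_b   # Eq. 3.6.9 / 3.6.19
    L_1, L_2 = L_a - L_M, L_b - L_M
    k = L_M / math.sqrt(L_1 * L_2)      # Eq. 3.6.20
    return L_1, L_2, k

# hypothetical example: R_L = 100 ohm, C_o = 10 pF
L1, L2, k = t_coil_params(100.0, 0.0, 10e-12)    # symmetric case: k = 0.5
```

The two limiting cases reproduce the plotted data: with r_b = 0 the classic symmetrical result k = 0.5 and L₁ = L₂ appears, and with r_b = 0.5 R_L the mutual inductance vanishes (k = 0) while L₁/L₂ = 1/3.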
With the values from Eq. 3.6.18 and 3.6.19 inserted, the denominator of Eq. 3.6.11 in canonical form becomes:

   s² + s · 6/[R_LC_o(1 + r_b/R_L)²] + 12/[R_L²C_o²(1 + r_b/R_L)⁴] = 0   (3.6.21)

with the poles:

   s_{1,2} = σ₁ ± jω₁ = (−3 ± j√3)/[R_LC_o(1 + r_b/R_L)²]   (3.6.22)

Sometimes we prefer the normalized form of the roots; in this case ω_h = 1/[C_o(R_L + r_b)] = 1. To emphasize the normalization, we add the subscript n, so s_{1,2n} = σ_{1n} ± jω_{1n}. By applying the normalized poles of Eq. 3.6.22 to Eq. 2.2.27, which is a generalized second-order magnitude function, we obtain:

   |F(ω)| = (σ_{1n}² + ω_{1n}²)/√{[σ_{1n}² + (ω/ω_h − ω_{1n})²] · [σ_{1n}² + (ω/ω_h + ω_{1n})²]}   (3.6.23)
By comparing the Bessel poles for a simple T-coil (Eq. 2.4.42) with Eq. 3.6.22, we notice that in the denominator we have an additional factor (1 + r_b/R_L). Therefore it is interesting to compare frequency responses with different ratios r_b/R_L, as listed in Table 3.6.1:

Table 3.6.1
   r_b/R_L | σ_{1n} | ω_{1n} | Note
   0.00    | 3.0    | 1.732  | symmetrical T-coil
   0.25    | 2.4    | 1.386  |
   0.50    | 2.0    | 1.155  |

The denormalized poles are:

   s_{1,2} = (−3 ± j√3) ω_h/(1 + r_b/R_L) = (−3 ± j√3)/[C_o(R_L + r_b)(1 + r_b/R_L)]   (3.6.24)
(Plot parameters of Fig. 3.6.3 to 3.6.6: r_b/R_L = {0, 0.25, 0.50}; C_b/C_o = {0.083, 0.130, 0.1875}; L₁/L₂ and k as tabulated in the figure: {1, 0.52, 0.33} and {0.50, 0.44, 0}; L = R_L²C_o; ω_h = 1/[C_o(R_L + r_b)]; the reference cases have L = 0.)
Fig. 3.6.3: MFED frequency response of the T-coil transistor interstage coupling circuit for three different values of r_b: a) r_b = 0; b) r_b = 0.25 R_L; c) r_b = 0.5 R_L. For comparison the three reference cases (L = 0): d), e), and f), which correspond to the same three r_b/R_L ratios, are drawn. The bandwidth improvement factor of the peaking system remains 2.72 times over the non-peaking reference for each value of r_b.
==h ="n
==h ="n
arctan
5"n
5"n
(3.6.25)
In Fig. 3.6.4 the phase plots for the same three ratios of <b VL as in the
frequency response are shown, along with the three references (P !).
3.6.3 Envelope Delay
The envelope delay is calculated using Eq. 2.2.35:

   τ_e ω_h = σ_{1n}/[σ_{1n}² + (ω/ω_h − ω_{1n})²] + σ_{1n}/[σ_{1n}² + (ω/ω_h + ω_{1n})²]   (3.6.26)

and the responses are drawn in Fig. 3.6.5 for the three different ratios r_b/R_L, in addition to the three references (L = 0).
Fig. 3.6.4: MFED phase response of the T-coil transistor interstage coupling circuit compared with the references (L = 0), for the same three values of r_b/R_L.
Fig. 3.6.5: MFED envelope delay response of the T-coil transistor interstage coupling circuit compared with the references (L = 0), for the same three values of r_b/R_L.
"
e5" > sin=" > ) 1
lsin )l
(3.6.27)
where ) is the pole angle according to Fig. 3.6.2. By inserting the pole angle ) "&!
or &1', as required by the 2nd -order Bessel system, we obtain:
&
g> " # e$ >X sin$ >X " 1
'
(3.6.28)
Fig. 3.6.6: MFED step response of the T-coil transistor interstage coupling circuit, compared with the references (L = 0), for the same three values of r_b/R_L; the time scale is normalized to T = C_o(R_L + r_b).
Thus we have completed the analysis of the basic case of a transistor T-coil interstage coupling. The reader who would like more information should study [Ref. 3.5]. In order to simplify the analysis, we have purposely neglected the transistor input resistance (β₀ + 1)R_e1 and also the stray inductance L_s from the tap to the transistor base terminal. In the next steps we shall discuss both of them.
(Fig. 3.6.7: a) the T-coil tap loaded by C_μ1, the base lead stray inductance L_s, and r_b1 of Q1; b) its low frequency equivalent, where the resistance seen at the base junction is:)

   R_i = (β₀ + 1)R_e1   (3.6.29)
as we have derived in Eq. 3.2.15. The effect of this resistance may be canceled if we insert an appropriate resistor R_s between the end of coil L₁ and the start of coil L₂, as shown in Fig. 3.6.8. It is essential that the resistor R_s is inserted on the left side of L₂ (at the T-coil tap node), because in this case the bridging capacitance C_b (the self-capacitance of the coil) and the magnetic field coupling (k) are utilized. With the resistor placed on the right side of L₂ (at the R_LC_b node), that would not be the case.
(1+ ) R e
Fig. 3.6.8: The resistance Vs in series with P# is inserted near the T-coil tap
to compensate the error in the impedance seen by the input current at low
frequencies, owed to the parallel connection of VL ll <b " " Ve .
At very high frequencies we can replace all capacitors by short circuits and all inductors by open circuits. In this case the input resistance of the T-coil circuit is R_L. But at very low frequencies the capacitors represent an open circuit and the inductors a short circuit. The transistor input resistance is then effectively in parallel with R_L. It is the task of the series resistor R_s to prevent this reduction of resistance. The idea is that:

   (r_b + R_i) ∥ (R_s + R_L) = R_L   (3.6.30)

from which:

   R_s = R_L²/(R_i + r_b − R_L)   (3.6.31)
The introduction of this resistor spoils all the expressions from our previous
analysis and, to be exact, everything we derived to determine the basic T-coil
parameters should be calculated anew, considering the additional parameter Rs. Since
in practice the value of this resistor is very small, no substantial changes in the other
circuit parameters are to be expected, and the additional effort which would be required
by an exact analysis would be worthless.
A sneaky method for implementing this compensation, while at the same time
decreasing the stray capacitance (and also creating difficulties for any competitor trying
to copy the circuit), is to wind the coil L2 with an appropriate resistive wire.
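As a quick numeric check, Eq. 3.6.31 can be evaluated directly. The sketch below uses illustrative component values (not taken from the book) and verifies that the parallel combination of Eq. 3.6.30 indeed returns RL:

```python
# Hedged sketch of Eq. 3.6.29-3.6.31; component values are illustrative.

def tap_compensation_rs(RL, rb, beta, Re):
    """Series resistor Rs (Eq. 3.6.31) restoring the low-frequency
    input resistance of the T-coil to RL."""
    ri = rb + (1.0 + beta) * Re      # transistor input resistance, Eq. 3.6.29
    return RL**2 / (ri - RL)

RL, rb, beta, Re = 50.0, 15.0, 100.0, 10.0   # assumed values
Rs = tap_compensation_rs(RL, rb, beta, Re)

# Verify Eq. 3.6.30: [rb + (1 + beta) Re] || (Rs + RL) = RL
ri = rb + (1.0 + beta) * Re
parallel = ri * (Rs + RL) / (ri + Rs + RL)
```

As expected, Rs comes out small compared with RL, which is why the rest of the T-coil design barely changes.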
3.6.6 Consideration of the base lead stray inductance
Fig. 3.6.9a shows the T-coil with the base lead stray inductance Ls at the tap.
From Fig. 3.6.9b we realize that the positive inductance of the base lead Ls actually
decreases the negative mutual inductance −LM of the T-coil. To retain the same
conditions as in Fig. 3.6.2c at the beginning of the basic T-coil analysis, the coupling
factor must be increased, thus increasing the mutual inductance to LM + Ls.
[Fig. 3.6.9 artwork: a) the T-coil with Cb, L1, L2, the base lead inductance Ls, Co, rb and RL; b) the equivalent circuit with k = 0 and the tap branch L−M in series with Ls.]
Fig. 3.6.9: a) The base lead inductance Ls decreases the value of the mutual
inductance, as indicated by the equivalent circuit in b). This can be
compensated by recalculating the circuit with an increased coupling factor.
We shall mark the new T-coil circuit parameters with a prime (′) to distinguish
them from the original parameters of the transistor interstage T-coil:

L′M = LM + Ls
(3.6.32)
Because the inductance from rb to either end of the coil L is now increased by
Ls, both inductances La and Lb must be decreased by the value of Ls:

L′a = La − Ls    and    L′b = Lb − Ls
(3.6.33)
By considering all these changes, the new (larger) value of the coupling factor is
k′ = L′M / √(L′1 L′2) = (LM + Ls) / √[(La − LM − 2Ls)(Lb − LM − 2Ls)]
(3.6.34)
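The correction of Eq. 3.6.32–3.6.34 is easy to check numerically. The sketch below assumes the usual T-network convention, in which the arm inductances are La = L1 + LM and Lb = L2 + LM (so the self-inductances of the coil halves are L1 = La − LM, L2 = Lb − LM); all values are illustrative:

```python
# Hedged sketch of Eq. 3.6.32-3.6.34 (illustrative values, in nH).
import math

def corrected_coupling(La, Lb, LM, Ls):
    LMp = LM + Ls            # Eq. 3.6.32: increased mutual inductance
    Lap = La - Ls            # Eq. 3.6.33
    Lbp = Lb - Ls
    L1p = Lap - LMp          # self-inductance of first half  = La - LM - 2*Ls
    L2p = Lbp - LMp          # self-inductance of second half = Lb - LM - 2*Ls
    return LMp / math.sqrt(L1p * L2p)    # Eq. 3.6.34

La, Lb, LM, Ls = 10.0, 10.0, 2.0, 0.5
k_original  = LM / math.sqrt((La - LM) * (Lb - LM))
k_corrected = corrected_coupling(La, Lb, LM, Ls)   # larger than k_original
```

The corrected coupling factor comes out larger, as the text requires: the numerator grows by Ls while both self-inductances shrink.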
[Fig. 3.6.10 artwork: recoverable labels are Cs, ii1, Q1, rb, C1, C2, C3, rb1, rb2, panels a) and b); the artwork itself is not recoverable from the text extraction.]
Those readers who are interested in the results and further suggestions, should
study [Ref. 3.20].
3.6.8 The Folded Cascode
While we are still speaking about cascode amplifiers, let us examine the folded
cascode circuit, Fig. 3.6.11. This circuit is a handy solution in cases of a limited supply
voltage, a situation commonly encountered in modern integrated circuits and battery
supplied equipment.
The first thing to note is that the collector dc currents can be different, since the
bias conditions for Q1 are set by the input base voltage and Re1, whilst for Q2 the bias
is set by Vcc, Vb2, and Rcc.
Another interesting point is that Rcc (or a current source in its place) must
supply the current for both transistors. Therefore when a signal is applied at the input
the currents in Q1 and Q2 will be in anti-phase, i.e., when io1 increases, ii2 decreases
and vice-versa. It is thus easier to achieve good thermal balance with such a circuit than
with the original cascode.
[Fig. 3.6.11 artwork: the folded cascode with Vcc, Rcc, Icc, Q1 (driven by ii1, with Re1 and Ce1 at its emitter), the complementary Q2 biased by Vb2, and the output load RL.]
Fig. 3.6.11: The folded cascode is formed by a complementary, npn and pnp, transistor
pair, connected in the otherwise usual cascode configuration. Since thermionic devices are
not produced in complementary pairs, this circuit can not be realized with electronic tubes.
[Fig. 3.7.1 artwork: the basic differential amplifier; Q1 and Q2 with loads RL1 and RL2, emitter resistors 2Ree on each side of the symmetry line, emitter currents ie1 = ie2 = Iee/2, supplies Vcc and Vee. Eq. 3.7.1 is not recoverable from the text extraction.]
In a similar way to the input voltages, the signal output voltages vo1 and vo2 go
up and down by an equal amount; however, we must account for the signal inversion in
the common emitter amplifier. If the voltage amplification of the input voltage
difference is Avd (which we can take directly from Eq. 3.1.14, where we discussed a
simple common-base amplifier), the output signal voltage difference is:

vo2 − vo1 = −(vo1 − vo2) = Avd (vi1 − vi2)        (3.7.2)
An attentive reader will note that we have added the subscript d to denote the
differential mode gain.
In the case of both input voltages being equal and of the same polarity, both
output voltages will also be equal and of the same polarity; however, the output signal
polarity is the inverse of the input signal polarity (owing to the 180° phase inversion
of each common emitter amplifier stage). If the symmetry of the circuit were perfect,
the output voltage difference would be zero, provided that the common mode excitation
at the input remains well within the linear range of the amplifier. Such operation is
named common-mode amplification, Avc (here we have added the subscript c).
For the common mode signal the excursion of both output voltages with respect
to their DC value is:

vo1 = vo2 = Avc (vi1 + vi2)/2 ≈ −[RL1/(2 Ree)] · (vi1 + vi2)/2        (3.7.3)
A good way to visualize the common mode operation is to fold the circuit
across the symmetry line and consider it as a single ended amplifier with both
transistors and both loading resistors in parallel.
Since we are more interested in the differential mode amplification, we have
pulled the expression for the common mode amplification, so to say, out of the hat.
The analysis so far is more intuitive than exact. The reason we did not bother to make a
corresponding derivation of exact formulae is that the simple circuit shown in Fig. 3.7.1
is almost never used as a wideband amplifier, owing to its large input capacitance. A
basic differential amplifier in cascode configuration, with a constant current generator
instead of the resistor Ree, is drawn in Fig. 3.7.2. The reader who wants to study the full
analysis of the basic low-frequency differential amplifier according to Fig. 3.7.1 should
look in [Ref. 3.7].
same reason there is still some temperature drift, although greatly reduced in
comparison with the single ended amplifier. The appearance of common mode signals
at the output is especially annoying in electrobiological amplifiers (electrocardiographs
and electroencephalographs). In these amplifiers, very small input signal differences (of
the order of several μV) must be amplified in the presence of large (up to 1 V) common
mode signals from power lines, owing to capacitive pickup. The ability of the
differential amplifier to reject the common mode signal is called the common mode
rejection ratio, CMRR = Avd/Avc, generally expressed in decibels (dB). Since this is
beyond the scope of this book, we will not pursue these effects in detail.
[Fig. 3.7.2 artwork: the differential cascode with Q1–Q4, loads RL1 and RL2, cascode base bias Vbb, emitter network Re1, Re2 and Cee, tail current Iee, supplies Vcc and Vee.]
Fig. 3.7.2: The basic circuit of the differential cascode amplifier.
The optimum thermal stability of the differential cascode circuit could again be
obtained by adjusting the quiescent currents in both halves of the differential amplifier
to values such that the voltage drop on each loading resistor is equal to the voltage
(Vcc − Vbb + Vbe)/2 (see Eq. 3.4.29 and the corresponding explanation). However, as
has been said for the simple cascode amplifier, the requirements for large bandwidth
will prevent this from being realized. We would want to have a low RL, high Vcc and Vbb,
and a high Iee to maximize the bandwidth. So the thermal stability will have to be
established in a different way.
Differential amplifiers are particularly suitable for the compensation of many
otherwise unsolvable errors. This is achieved by cross-coupling and adding anti-phase
signals, so that the errors cancel out. For example, the pre-shoot of the simple cascode
amplifier, which is owed to capacitive feedthrough, can be effectively eliminated if two
capacitors with the same value as Cμ1,2 are connected from the Q1 emitter to the Q2
collector, and vice-versa.
Similarly, by cross-coupling diodes or transistors we can achieve non-linearity
cancellation, leakage current compensation, better gain stability, DC stability, etc. In
integrated circuits, even production process variations can be compensated in this way.
Some such examples are given in Part 5.
[Fig. 3.7.3 artwork: a) the basic current mirror with Q5, Q6, the resistor R and supplies Vcc and Vee; b) the same circuit with the currents normalized to the base current (collector currents β, resistor current β + 2); c) the Wilson mirror with Q7 added.]
Fig. 3.7.3: a) The basic current mirror. b) Current symmetry analysis with the currents
normalized to those of the base. c) The symmetry is improved in the Wilson mirror.
Ic5 / Ic6 = β5 / β6        (3.7.4)

If both transistors are identical, then β5 = β6 = β and Ic5 = Ic6 = β Ib. In this case:

IR = Ic5 (1 + 2/β)        (3.7.5)

and the collector current is:

Ic5 = IR / (1 + 2/β)        (3.7.6)

where:

IR = (Vcc − Vbe) / R        (3.7.7)
The Early effect makes the collector current dependent also on the collector–emitter voltage:

Ic = Is e^(Vbe/VT) (1 + Vce/VA)        (3.7.8)

Eq. 3.7.8 can be written simply from the geometric relations taken from
Fig. 3.4.11. For a common silicon transistor the Early voltage VA is at least 100 V. Suppose
both silicon transistors in Fig. 3.7.3a are identical and subject to the same temperature
variations (on the same chip). The collector–emitter voltage of transistor Q5 is the same
as the base–emitter voltage, Vce5 = Vbe5 ≈ 0.65 V. In contrast, the collector–emitter
voltage of transistor Q6 is higher, say, Vce6 = 15 V. The ratio of both collector currents
is then:
Ic6 / Ic5 = (1 + Vce6/VA) / (1 + Vce5/VA) = (1 + 15/100) / (1 + 0.65/100) ≈ 1.14        (3.7.9)
The incremental collector resistance follows from the slope of the output characteristic:

ro = ∂Vce6/∂Ic6 = (VA + Vce6) / Ic6        (3.7.10)
Returning to our differential amplifier example of Fig. 3.7.2 with Vee = 15 V,
suppose we require a differential amplifier current Iee = Ie1 + Ie2 = Ic6 = 0.03 A. By
assuming the Early voltage VA = 135 V and Vce6 ≈ Vee, the incremental collector
resistance is:

ro = (135 + 15)/0.03 = 5 kΩ        (3.7.11)

If we were to replace the current generator by a simple resistor Ree = ro, the voltage Vee
in Fig. 3.7.2 would have to be:

(Ie1 + Ie2) Ree = 0.03 A × 5000 Ω = 150 V        (3.7.12)

which is 10 times more. Correspondingly, the power dissipation in the resistor Ree
would also be 10 times greater, or 4.5 W, compared to 0.45 W for Q6.
A high incremental resistance ro is also important for achieving a high CMRR,
because it gives the differential amplifier a higher immunity to power supply voltage
variations (which are also a common mode signal).
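The numbers of Eq. 3.7.10–3.7.12 can be re-checked in a few lines (the values are the example figures used above):

```python
# Numeric re-check of Eq. 3.7.10-3.7.12.
VA   = 135.0    # Early voltage [V]
Vce6 = 15.0     # Vce of Q6, approximately equal to Vee [V]
Iee  = 0.03     # differential amplifier tail current [A]

ro = (VA + Vce6) / Iee      # Eq. 3.7.11: incremental resistance, 5 kOhm
Vee_equiv = Iee * ro        # Eq. 3.7.12: 150 V needed for a plain resistor Ree = ro
P_Ree = Iee**2 * ro         # 4.5 W dissipated in such a resistor
P_Q6  = Iee * Vce6          # 0.45 W dissipated in Q6 instead
```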
A simple way of improving the current generator, thus achieving even greater
CMRR factors, is shown in Fig. 3.7.4, where negative feedback, provided by the Q5
gain, is used to stabilize the collector current of Q6 and increase the incremental
resistance, whilst a low voltage zener diode (named after its inventor, the American
physicist Clarence M. Zener, 1905–1993) reduces the Vbe thermal drift of Q5, owing to
an almost equal, but opposite, thermal coefficient.
In this circuit any increase in the Q6 collector current Ic6 is sensed by its voltage
drop on R2, increasing Vb5, which in turn increases Ie5, thus reducing Ib6 and therefore
also Ic6. The feedback reduction factor is nearly equal to the Q5 current gain β.
Effectively, the output resistance is increased from the ro of Eq. 3.7.10 to about β ro. Note
that this circuit does not rely on identical transistor parameters, so it can be used in
discrete circuits.
[Fig. 3.7.4 artwork, panels a) and b): R1 biases the zener DZ; the base of Q6 sits at VZ + 2Vbe above Vee; with Ib5 = Ib6, the drop on R2 is VZ + Vbe2, so IR2 = (VZ + Vbe2)/R2.]
Fig. 3.7.4: Improved current generator: a) voltage drops; b) current analysis.
The circuits shown and only briefly discussed here should give the reader a
starting point in current control design. Many more circuits, either simple or more
elaborate, can be found in the references quoted.
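From the annotations of Fig. 3.7.4, the feedback forces the voltage drop on R2 to VZ + Vbe, which sets the output current. A minimal sketch with assumed (not book-given) part values:

```python
# Hedged sketch of the Fig. 3.7.4 current generator; all values are assumed.
VZ   = 3.3      # zener voltage [V]
Vbe  = 0.65     # base-emitter voltage [V]
R2   = 132.0    # emitter resistor of Q6 [ohm], chosen here for ~30 mA
beta = 100.0    # Q5 current gain

Iee = (VZ + Vbe) / R2   # ~0.03 A (see the IR2 annotation in Fig. 3.7.4)
ro  = 5e3               # mirror output resistance from Eq. 3.7.11
ro_fb = beta * ro       # feedback raises it to about beta * ro
```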
io/is = (Rs/Re) · 1/{1 + s[(Rs/Re) τT1 + Rs Cμ/2]} · 1/(1 + s τT2)        (3.8.1)
[Fig. 3.8.1 artwork: is drives Rs/2 on each side; Q1, Q2 with Re/2 and Cee = τT1/Re between the emitters, cascode pair Q3, Q4 biased by Vbb, capacitors C1 and C2, output io, tail current Iee, supply Vee.]
Fig. 3.8.1: The current driven differential cascode amplifier. We assume that the transistor
pair Q1,2 and the pair Q3,4, respectively, have identical main parameters. The emitters of
Q1,2 see the Re Cee network, set to equal the time constant τT1 of the transistors Q1,2.
Ai = Rs / Re        (3.8.2)
We can substitute this in Eq. 3.8.1 to highlight the bandwidth dependence on gain:
io/is = Ai · 1/{1 + s Ai (τT1 + Re Cμ/2)} · 1/(1 + s τT2)        (3.8.3)
This means that by reducing the gain Ai we can extend the bandwidth. On this
basis arises the idea that we can add another differential stage to double the
current (and therefore also the current gain) and then optimize the stage, choosing
between the doubling of gain with the same bandwidth and the doubling of bandwidth
with the same gain, or any factor in between.
The basic fT doubler circuit, developed by C.R. Battjes [Ref. 3.1], is presented
in Fig. 3.8.2. Each differential pair amplifies the voltage drop on its own Rs/2 and each
pair sees its own Re between the emitters, thus the current gain is simply:

io/is = Rs/Re        (3.8.4)
E3
T2
Q3
C1
is
Rs / 2
"
"
Vs G.
7T"
" = 7T#
" =Vs
# Ve
#
"
"
Ve G.
7T"
" = 7T#
" = E3
#
#
io
T2
Q4
Vbb
C 2
T1
T1
Q1a
T1 Q2a
Cee =
Re
Re / 2
(3.8.4)
T1
T1
Q1b
Q
Cee = T1 2b
Re
Re / 2
Re / 2
Re / 2
Rs / 2
Iee2
Iee1
Fig. 3.8.2: The basic fT doubler circuit. We assume equal transistors for Q1a,2a and Q1b,2b.
The low input impedance of the Q3,4 emitters allows summing the collector currents of each
pair, cross-coupled for in-phase signal summing.
[Eq. 3.8.5 and Eq. 3.8.6, giving the input capacitance Ci of the conventional stage and of the fT doubler, are not recoverable from the text extraction.]
Another problem is that, although the transfer function is of second order, there
are two real poles, so we cannot tune the system for efficient peaking. By forcing
the system to have complex conjugate poles with emitter peaking, we would increase
the emitter capacitance Cee, which would be reflected into the base as an increased
input capacitance; this would increase exactly that term which we have just halved.
A quick estimate will give us a little more feeling for the achievable
improvement. Let us have a number of transistors with fT = 3.5 GHz, Cμ = 1 pF,
Rs/2 = 50 Ω, a Q3,4 collector load RL = 50 Ω, CL = 1 pF and the total current
gain Ai = 3. Assuming that the system's response is governed by a dominant pole, we
can calculate the rise time of the conventional system as:
tr1 = 2.2 √{ [Ai (1/(2π fT) + Re Cμ/2)]² + [1/(2π fT)]² + [RL (CL + Cμ)]² }        (3.8.7)

With Re = Rs/Ai this gives:

tr1 ≈ 476 ps        (3.8.8)

For the doubler the input time constant term is reduced (Eq. 3.8.4), and:

tr2 ≈ 355 ps        (3.8.9)
and the improvement factor is 1.34, much less than 2. Transistors with a lower fT might
give an apparently greater improvement (about 1.7 could be expected) owing to the
lower contribution of the source's impedance. However, it seems that a better idea
would be to remain with the original bandwidth and use the gain doubling instead,
which could lead to a system with a lower number of stages, which in turn could be
optimized more easily.
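The estimate of Eq. 3.8.7–3.8.9 can be reproduced as follows. Two details are assumptions chosen to match the quoted figures: Re = Rs/Ai, and the doubler input time constant modeled as (Ai/2)(τT1 + Re Cμ):

```python
# Hedged numeric re-check of Eq. 3.8.7-3.8.9.
import math

fT   = 3.5e9                    # transistor transition frequency [Hz]
tauT = 1.0 / (2 * math.pi * fT) # ~45.5 ps
Cmu  = 1e-12                    # collector-base capacitance [F]
CL   = 1e-12                    # load capacitance [F]
Rs   = 100.0                    # source resistance (Rs/2 = 50 ohm per side)
RL   = 50.0                     # collector load [ohm]
Ai   = 3.0                      # total current gain
Re   = Rs / Ai                  # emitter resistance (assumption)

tau_out = RL * (CL + Cmu)       # output node time constant

tau1_conv = Ai * (tauT + Re * Cmu / 2)        # conventional input term
tr1 = 2.2 * math.sqrt(tau1_conv**2 + tauT**2 + tau_out**2)   # ~476 ps

tau1_dbl = (Ai / 2) * (tauT + Re * Cmu)       # doubler input term (assumed)
tr2 = 2.2 * math.sqrt(tau1_dbl**2 + tauT**2 + tau_out**2)    # ~355 ps

improvement = tr1 / tr2                       # ~1.34
```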
On the other hand, the reduced input capacitance is really beneficial to the
loading of the input T-coils. With the data from the example above, we can calculate
the T-coils for the conventional and the doubler system and the resulting bandwidths.
From Eq. 3.6.21 we can find that:
[Eq. 3.8.10, expressing the T-coil upper cutoff frequency ωH as a function of rb, RL and Ci, is not recoverable from the text extraction; ωH is inversely proportional to RL Ci.]
By assuming rb = 15 Ω and a Ci of 65 and 37 pF, respectively (Eq. 3.8.5 and 3.8.6),
we can calculate an fH of 1.9 and 3.4 GHz, a ratio of nearly 1.8, which is worth
considering.
In principle one could use the same doubler implementation with 4, 6, or more
transistor pairs; however, the input capacitance poses a practical limit. A system with 4
pairs is already slower than the system with two pairs.
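Since the T-coil cutoff of Eq. 3.8.10 is inversely proportional to RL Ci, the bandwidth gain of the doubler follows directly from the capacitance ratio; a small check with the example values quoted above:

```python
# Hedged check: fH scales as 1/Ci for fixed rb and RL (Eq. 3.8.10).
Ci_conv = 65e-12      # input capacitance, conventional stage [F] (Eq. 3.8.5)
Ci_dbl  = 37e-12      # input capacitance, fT doubler [F] (Eq. 3.8.6)
fH_conv = 1.9e9       # [Hz], from the example above

fH_dbl = fH_conv * Ci_conv / Ci_dbl   # ~3.3 GHz
ratio  = Ci_conv / Ci_dbl             # ~1.76, "nearly 1.8"
```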
[Fig. 3.9.1 artwork: Q1 with gate g, source s and drain d, the capacitances Cgs and Cgd, the controlled current gm vGS, driven by vG, loaded by ZL, supply Vdd; panels a), b), c).]
Fig. 3.9.1: The JFET source follower: a) circuit schematic; b) the same
circuit, but with an ideal JFET and the inter-electrode capacitances drawn as
external components; c) equivalent circuit.
A MOSFET (metal oxide semiconductor field effect transistor) has an even greater input
resistance (up to ~10^15 Ω); however, it also has a greater input capacitance (between 20
and 200 pF; it is also more noisy and more sensitive to damage by being overdriven), so
it is not suitable for a wideband amplifier input stage.
In Fig. 3.9.1b we have drawn an ideal JFET device with its inter-electrode
capacitances modeled as external components. These capacitances determine the
response at high frequencies [Ref. 3.8, 3.16, 3.20, 3.35]. Fig. 3.9.1c shows the
equivalent circuit.
The source follower is actually the common drain circuit with a voltage gain of
nearly unity, as the name follower implies. The meaning of the circuit components is:
Cgd — gate–drain capacitance; in most manufacturers' data sheets it is labeled
Crss (common source circuit reverse capacitance); values usually range
between 1 and 5 pF;
Cgs — gate–source capacitance; data sheets specify Ciss, the common source
input capacitance, so Cgs = Ciss − Crss;
gm — the transconductance;
ZL — the loading impedance.
The JFET drain is connected to the power supply, which must be a short circuit
for the drain signal current; therefore we can connect Cgd to ground, in parallel with the
signal source. We assume the signal source impedance to be zero, so we can forget
about Cgd for a while.
From the equivalent circuit in Fig. 3.9.1c we find the currents for the node g:
ig = vG s Cgd + (vG − vs) s Cgs        (3.9.1)

and for the node s:

vs/ZL = (vG − vs)(s Cgs + gm)        (3.9.2)

Solving for the voltage transfer:

vs/vG = ZL (gm + s Cgs) / [1 + ZL (gm + s Cgs)]        (3.9.3)

which, dividing through by gm ZL, can be put in the form:

vs/vG = (1 + s Cgs/gm) / [1 + 1/(gm ZL) + s Cgs/gm]        (3.9.4)
have done in the differential amplifier. By doing this we increase the real (resistive)
part of the loading impedance, but we can do little to reduce the always present loading
capacitance CL.
[Fig. 3.9.2 artwork: Q1 with Cgd and Cgs, biased by the current source Is, loaded by CL, supplies Vdd and Vss; the input impedance Zi contains Cx = Cgs CL/(Cgs + CL) and −Rx = −(Cgs + CL)²/(gm Cgs CL).]
Fig. 3.9.2: The JFET source follower biased by a current generator and loaded only by the
inevitable stray capacitance CL: a) circuit schematic; b) the input impedance has two
negative components, owed to Cgs and gm (see Sec. 3.9.5).
In Eq. 3.9.4 the term Cgs/gm is obviously the characteristic JFET time constant, τFET:

τFET = Cgs/gm = 1/ωFET        (3.9.5)
With ZL = 1/(jω CL) the voltage gain becomes:

Av = vs/vG = (1 + jω/ωFET) / (1 + jω/ωFET + jω CL/gm)        (3.9.6)

or, combining the two frequency terms in the denominator:

vs/vG = (1 + jω/ωFET) / [1 + jω/(ωFET ηc)]        (3.9.7)

Here ηc is the input to output capacitive divider, which would set the output voltage if
only the capacitances were in place:

ηc = Cgs / (Cgs + CL)        (3.9.8)
We would like to express Eq. 3.9.7 by its pole s1 and zero s2, so we need the
normalized canonical form:

F(s) = A0 (ω1/ω2) (s + ω2)/(s + ω1)        (3.9.9)

In our case:

F(s) = ηc (s + ωFET)/(s + ωFET ηc)        (3.9.10)

so that, by comparison:

A0 = 1,    ηc = ω1/ω2        (3.9.11)

ω2 = ωFET        (3.9.12)

ω1 = ωFET ηc        (3.9.13)
These simple relations are the basis from which we shall calculate the frequency
response magnitude, the phase, the group delay and the step response of the JFET
source follower (simplified at first and including the neglected components later).
3.9.1 Frequency response magnitude
The frequency response magnitude is the normalized absolute value of F(ω),
and we want to have the normalization in both gain and frequency. Eq. 3.9.7 is already
normalized in frequency (to ωFET) and in gain (A0 = 1). To get the magnitude, we must
multiply F(jω) by its complex conjugate, F(−jω), and take the square root:
|F(ω)| = √[F(jω) F(−jω)] = √{ [1 + (ω/ωFET)²] / [1 + (ω/(ωFET ηc))²] }        (3.9.14)
Since we want to examine the influence of loading, we shall plot the transfer
function for three different values of the ratio CL/Cgs: 0.5, 1.0, and 2.0 (the
corresponding values of ηc being 0.67, 0.5, and 0.33, respectively). The plots, shown
in Fig. 3.9.3, have three distinct frequency regions: in the lower one the circuit behaves
as a voltage follower, with the JFET playing an active role, so that vs ≈ vG, whilst in
the highest frequency region only the capacitances are important; in between we have a
transition between both operating modes.
[Fig. 3.9.3 plot: magnitude vs/vG from 0.1 to 1.0 over ω/ωFET = 0.01 to 100, for CL/Cgs = 0.5, 1.0, 2.0 (curves a, b, c); inset: the RG = 0 follower circuit with ωFET = gm/Cgs.]
Fig. 3.9.3: Magnitude of the frequency response of the JFET source follower for three
different capacitance ratios CL/Cgs. The pole ω3 = 1/(RG Ci) has not been taken into
account here (see Fig. 3.9.7 and Fig. 3.9.8).
The relation for the upper cutoff frequency is very interesting. If we set |F(ωh)|
to be equal to 1/√2:

|F(ωh)|² = 1/2 = [1 + (ωh/ωFET)²] / [1 + (ωh/(ωFET ηc))²]        (3.9.15)

it follows that:

ωh = ωFET ηc / √(1 − 2ηc²)        (3.9.16)
From Eq. 3.9.16 we can conclude that by putting ηc = 1/√2 the denominator is
reduced to zero, thus ωh → ∞. This means that for such a capacitive ratio the
magnitude never falls below 1/√2. However attractive the possibility of achieving
infinite bandwidth may seem, this can never be realized in practice, because any signal
source will have some, although small, internal resistance RG, resulting in an
additional input pole s3 = −1/(RG Ci), where Ci is the total input capacitance of the
JFET. The complete transfer function will now be (see Fig. 3.9.7 and 3.9.8):
F(s) = (ω1 ω3/ω2) · (s + ω2) / [(s + ω1)(s + ω3)]        (3.9.17)
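A quick sketch of Eq. 3.9.8 and 3.9.16 shows how the ideal (RG = 0) cutoff runs away as ηc approaches 1/√2:

```python
# Hedged sketch of Eq. 3.9.8 and 3.9.16 (RG = 0, normalized to omega_FET).
import math

def eta_c(Cgs, CL):
    return Cgs / (Cgs + CL)                      # Eq. 3.9.8

def omega_h(eta):
    if 2.0 * eta**2 >= 1.0:
        return math.inf                          # |F| never falls below 1/sqrt(2)
    return eta / math.sqrt(1.0 - 2.0 * eta**2)   # Eq. 3.9.16

# The three ratios CL/Cgs used in the plots:
cutoffs = {r: omega_h(eta_c(1.0, r)) for r in (0.5, 1.0, 2.0)}
unbounded = omega_h(0.75)   # any eta_c >= 1/sqrt(2) ~ 0.707 gives no -3 dB point
```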
3.9.2 Phase
We obtain the phase response from Eq. 3.9.7 by taking the arctangent of the
ratio of the imaginary to the real part of F(jω):

φ(ω) = arctan [ ℑ{F(jω)} / ℜ{F(jω)} ]        (3.9.18)

Since in Eq. 3.9.7 we have a single real pole and a single real zero, the resulting
phase angle is calculated as:

φ(ω) = arctan(ω/ωFET) − arctan(ω/(ωFET ηc))        (3.9.19)
In Fig. 3.9.4 the three phase plots for the same CL/Cgs ratios are shown. Because
of the zero, the phase returns to its initial value at high frequencies.
[Fig. 3.9.4 plot: phase φ from 0° to −45° over ω/ωFET = 0.01 to 100, for CL/Cgs = 0.5, 1.0, 2.0; inset: the RG = 0 follower circuit.]
Fig. 3.9.4: Phase plots of the JFET source follower for the same three capacitance ratios.
The envelope delay is obtained as the frequency derivative of the phase:

τe = dφ/dω        (3.9.20)

but we usually prefer the normalized expression τe ωh. In our case, however, the upper
cut-off frequency, ωh, changes with the capacitance divider ηc. So instead of ωh we
shall, rather, normalize the envelope delay to the characteristic frequency of the JFET
itself, ωFET:

τe ωFET = 1/[1 + (ω/ωFET)²] − ηc/[ηc² + (ω/ωFET)²]        (3.9.21)
The envelope delay plots for the three capacitance ratios are shown in Fig. 3.9.5.
Note that for all three ratios there is a frequency region in which the envelope delay
becomes positive, implying an output signal advance which, in correlation with the
phase plots, goes up with frequency. We have explained the physical background of
this behavior in Part 2, Fig. 2.2.5 and 2.2.6. The positive envelope delay influences the
input impedance in a very unfavorable way, as we shall soon see.
[Fig. 3.9.5 plot: normalized envelope delay τe ωFET from +0.5 (advance) down to −2.0 (delay) over ω/ωFET = 0.01 to 100, for CL/Cgs = 0.5, 1.0, 2.0; inset: the RG = 0 follower circuit.]
Fig. 3.9.5: The JFET envelope delay for the three capacitance ratios. Note the positive
peak (phase advance region): trouble in sight!
For the step response we multiply F(s) by the unit step operator 1/s:

G(s) = F(s)/s = ηc (s + ω2) / [s (s + ω1)]        (3.9.22)

and the time domain response is the sum of the residues of G(s) e^(st):

g(t) = Σ res [G(s) e^(st)]        (3.9.23)

res0 = lim_{s→0} s · ηc (s + ω2) e^(st) / [s (s + ω1)] = ηc ω2/ω1 = 1        (3.9.24)

res1 = lim_{s→−ω1} (s + ω1) · ηc (s + ω2) e^(st) / [s (s + ω1)] = −(1 − ηc) e^(−ω1 t)        (3.9.25)

g(t) = 1 − (1 − ηc) e^(−ω1 t)        (3.9.26)

and, by considering Eq. 3.9.12 and 3.9.13, as well as that ωFET = 1/τFET, and using the
normalized time t/τFET, we end up with:

g(t) = 1 − (1 − ηc) e^(−ηc t/τFET)        (3.9.27)
The plot of this relation is shown in Fig. 3.9.6, again for the same three
capacitance ratios. The initial output signal jump at t = 0 is the input signal cross-talk
(through Cgs) multiplied by the ηc factor:

g(0) = ηc        (3.9.28)

Following the jump is the exponential relaxation towards the normal follower action at
lower frequencies. If the input pole s3 = −1/(RG Ci) is taken into account, the jump
would be slowed down to an exponential rise with a time constant of RG Ci.
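Eq. 3.9.27 and 3.9.28 can be probed directly; a minimal sketch in normalized time, for the RG = 0 case:

```python
# Hedged sketch of Eq. 3.9.27/3.9.28: step response of the ideal follower.
import math

def g_step(t_norm, eta):
    """Eq. 3.9.27 with t_norm = t/tau_FET."""
    return 1.0 - (1.0 - eta) * math.exp(-eta * t_norm)

eta = 0.5                     # CL/Cgs = 1 -> eta_c = 0.5
jump = g_step(0.0, eta)       # Eq. 3.9.28: initial cross-talk jump = eta_c
settled = g_step(20.0, eta)   # relaxes towards 1 (normal follower action)
```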
[Fig. 3.9.6 plot: normalized step response vs/vG from 0 to 1.2 over t/τFET = 0 to 4, for CL/Cgs = 0.5, 1.0, 2.0; each trace starts at the capacitive divider value ηc and relaxes towards the follower action; inset: the RG = 0 follower circuit.]
Fig. 3.9.6: The JFET source follower step response for the three capacitance ratios.
When the signal source resistance RG is included, the node equations become:

(vG − vg)/RG = vg s Cgd + (vg − vs) s Cgs        (3.9.29)

(vg − vs)(s Cgs + gm) = vs s CL        (3.9.30)

From Eq. 3.9.30:

vg = vs (s CL + s Cgs + gm) / (s Cgs + gm)        (3.9.31)

Inserting this into Eq. 3.9.29 gives:

vG = vg [1 + s RG (Cgd + Cgs)] − vs s RG Cgs        (3.9.32)

and the complete transfer function is:

vs/vG = (s Cgs + gm) / { s CL [1 + s RG (Cgd + Cgs)] + (1 + s RG Cgd)(s Cgs + gm) }        (3.9.33)
Now we put this into the normalized canonical form and use Eq. 3.9.5 again to
replace the term gm/Cgs with ωFET. Also, we express all the time constants as functions
of ωFET and the appropriate capacitance ratios. Finally, we want to see how the response
depends on the product gm RG, so we multiply all the terms containing RG by gm and
compensate each of them accordingly. The final expression is:
vs/vG = (1 + s/ωFET) / { 1 + (s/ωFET) [1 + CL/Cgs + gm RG (Cgd/Cgs)] + (s/ωFET)² gm RG (Cgd/Cgs) [1 + CL/Cgs + CL/Cgd] }        (3.9.34)
The responses plotted in Fig. 3.9.7 and Fig. 3.9.8 were calculated from Eq. 3.9.34 for:

Cgs/Cgd = 5,    ωFET = 1,    s = jω,    and    gm RG = 0.3; 1; 3
[Fig. 3.9.7 plot: magnitude vs/vG from 0.1 to 2.0 over ω/ωFET = 0.01 to 100, for gm RG = 0.3, 1.0, 3.0 with CL/Cgs = 1; inset: the follower circuit with the source resistance RG and Cgd.]
Fig. 3.9.7: The JFET source follower frequency response for a ratio CL/Cgs = 1 and a
variable signal source impedance, such that gm RG is 0.3, 1, and 3, respectively. Note the
response peaking for gm RG = 3.
[Fig. 3.9.8 plot: step response from 0 to 1.2 over t/τFET = 0 to 4, for gm RG = 0.3, 1.0, 3.0 with CL/Cgs = 1; inset: the follower circuit with RG and Cgd.]
Fig. 3.9.8: The JFET source follower step response for the same conditions as in Fig. 3.9.7.
The input current flowing through Cgs into the source node is:

ii = (vG − vs) s Cgs        (3.9.35)

whilst the source voltage is built by this current and by the transconductance current,
both flowing into ZL:

vs = ii [1 + gm/(s Cgs)] ZL        (3.9.36)

Because the JFET source is biased from a constant current generator (whose
impedance we assume to be infinite), the loading admittance is 1/ZL = s CL. Let us put
this back into Eq. 3.9.36 and rearrange it a little:

vG s CL = ii [1 + (s CL + gm)/(s Cgs)]        (3.9.37)

Furthermore:

vG/ii = 1/(s CL) + 1/(s Cgs) + gm/(s² Cgs CL)        (3.9.38)

so that the input impedance is:

Zi = vG/ii = [s (Cgs + CL) + gm] / (s² Cgs CL)        (3.9.39)
To see more clearly how this impedance is composed, we invert it to find the
admittance and apply the continued fraction synthesis in order to identify the
individual components:

Yi = s² Cgs CL / [s (Cgs + CL) + gm] = s Cgs CL/(Cgs + CL) − [s gm Cgs CL/(Cgs + CL)] / [s (Cgs + CL) + gm]        (3.9.40)
The first fraction is the admittance of the capacitances Cgs and CL connected in series.
Let us name this combination Cx:

Cx = Cgs CL / (Cgs + CL)        (3.9.41)
The second fraction, which has a negative sign, must be further simplified. We invert it
again, and after some simple rearrangement we obtain the impedance:

Zx = −(Cgs + CL)²/(gm Cgs CL) − (Cgs + CL)/(s Cgs CL)        (3.9.42)

The first part is interpreted as a negative resistance, which we label −Rx in order
to follow the negative sign in the following analysis more clearly:

−Rx = −(Cgs + CL)² / (gm Cgs CL)        (3.9.43)

The second part is a negative capacitance, which we label −Cx because it has the same
absolute value as Cx from Eq. 3.9.41:

−Cx = −Cgs CL / (Cgs + CL)        (3.9.44)
Now that we have all the components, we can reintroduce the gate–drain capacitance
Cgd, so that the final equivalent input impedance looks like Fig. 3.9.9. We can write the
complete input admittance:

Yi = jω (Cgd + Cx) + 1 / [−Rx + 1/(jω(−Cx))]        (3.9.45)

[Fig. 3.9.9 artwork: a) the equivalent input circuit with Cgd, Cx, −Rx and −Cx; b) the follower driven through a source inductance LG; c) the equivalent Colpitts oscillator; labels: Q1, Cgs, Is, CL, Vdd, Vss.]
Fig. 3.9.9: a) The equivalent input impedance of the capacitively loaded JFET source
follower has negative components which can be a nuisance if, as in b), the signal source
has an inductive impedance, forming c) a familiar Colpitts oscillator. If Cgd is small, the
circuit will oscillate for a broad range of inductance values.
We can separate the real and imaginary parts of Yi by putting Eq. 3.9.45 on a
common denominator:

Yi = ℜ{Yi} + j ℑ{Yi}        (3.9.46)
The negative real part can cause some serious trouble [Ref. 3.24]. Suppose we are
troubleshooting a circuit with a switching power supply and we suspect it to be a cause
of strong electromagnetic interference (EMI); we want to use a coil of an
appropriate inductance L (which, of course, has its own real and imaginary admittance
components) to inspect the various parts of the circuit for EMI intensity and field direction.
If we connect this coil to the source follower and if the coil resistance is low, we would have:

ℜ{YL} + ℜ{Yi} < 0        (3.9.47)

and the source follower becomes a familiar Colpitts oscillator, Fig. 3.9.9c [Ref. 3.25].
Indeed, some older oscilloscopes would burst into oscillation if connected to such a
coil with the input attenuator switched to maximum sensitivity (a few highly priced
instruments built by respectable firms back in the early 1970s were no exception).
By taking into account Eq. 3.9.42, 3.9.43 and 3.9.8, and substituting
ωFET = gm/Cgs, the real part of the input admittance can be rewritten as:

ℜ{Yi} = −gm (CL/Cgs) · (ω/ωFET)² / [1 + (ω/(ωFET ηc))²]        (3.9.48)

The last fraction represents the normalized frequency dependence of this admittance:

GiN = (ω/ωFET)² / [1 + (ω/(ωFET ηc))²]        (3.9.49)
Fig. 3.9.10 shows the plots of GiN for the same ratios of CL/Cgs as before. Note
the quadratic dependence on ηc at high frequencies.
[Fig. 3.9.10 plot: the normalized negative input conductance GiN over ω/ωFET = 0.01 to 100, for CL/Cgs = 0.5, 1.0, 2.0 (curves a, b, c).]
the JFET gate; since we should have some series resistance anyway in order to protect
the sensitive input from static discharge or accidental overdrive, this will seem to be the
preferred choice. However, after a closer look, this protection resistance is too small to
prevent oscillations in case of an inductive signal source impedance. The required
resistance value which will guarantee stability in all conditions will be so high that the
bandwidth will be reduced by nearly an order of magnitude. Thus this method of
compensation is used only if we do not care how much bandwidth we obtain.
A more elegant method of compensation is the one which we have used already
in Fig. 3.5.3. If we introduce a serially connected Rx Cx network in parallel with the
JFET input, as shown in Fig. 3.9.11, we obtain Yic = 0 and Zic → ∞. Note the
corresponding phasor diagram: we first draw the negative components, −Rx and −Cx,
find the impedance vector −Zx and invert it to find the negative admittance, −Yx. We
then compensate it by a positive admittance Yx such that their sum is Yic = 0. We finally
invert Yx to find Zx and decompose it into its real and imaginary parts, Rx and Cx:

Yic = 1/[−Rx + 1/(jω(−Cx))] + 1/[Rx + 1/(jω Cx)] = 0        (3.9.50)
[Fig. 3.9.11 artwork: a) the series Rx Cx network added in parallel with the input; b) the phasor diagram with −Rx, 1/(jω(−Cx)), −Zx, −Yx, Yx, Zx, Rx, 1/(jω Cx), and Yic = 0.]
Fig. 3.9.11: a) The negative components of the input impedance can be compensated by an
equal but positive network, connected in parallel, so that their admittances sum to zero
(infinite impedance). In b) we see the corresponding phasor diagram.
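The cancellation of Eq. 3.9.50 can be verified numerically; the same illustrative values as before, with the compensating branch simply mirroring the negative one:

```python
# Hedged sketch of Eq. 3.9.50: a positive series Rx-Cx branch cancels the
# negative input branch. Device values are illustrative.
import math

Cgs, CL, gm = 5e-12, 5e-12, 10e-3
Cx = Cgs * CL / (Cgs + CL)              # compensation capacitor = |-Cx|
Rx = (Cgs + CL)**2 / (gm * Cgs * CL)    # compensation resistor  = |-Rx|

def series_rc_admittance(R, C, w):
    """Admittance of a series R-C branch at angular frequency w."""
    return 1.0 / (R + 1.0 / (1j * w * C))

w = 2 * math.pi * 50e6
Y_negative = series_rc_admittance(-Rx, -Cx, w)   # the follower's input branch
Y_added    = series_rc_admittance(+Rx, +Cx, w)   # the compensating network
Y_total    = Y_negative + Y_added                # Eq. 3.9.50: sums to zero
```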
With this compensation, the total input impedance is that of the parallel
connection of Cgd with Cx, together with an assumed 1 MΩ gate bias resistor Rin:

Zi = Rin / { 1 + jω Rin [Cgd + Cgs CL/(Cgs + CL)] }        (3.9.51)
The analysis of the input impedance would be incomplete without Fig. 3.9.12,
where the Nyquist diagrams of the impedance are shown, revealing its frequency
dependence as well as the influence of different signal source impedances.
[Fig. 3.9.12 artwork, panels a)–d): Nyquist plots of Zi with follower circuit insets; marked points include f = 0 (Zi = Rin = 1 MΩ), f → ∞ (XC = 0), |XC| = Rin, the region affected by −Rx and −Cx, and fosc where the locus crosses the negative real axis with an inductive source LG.]
Fig. 3.9.12: a) The input impedance of the JFET source follower, assumed to be purely capacitive and in
parallel with a 1 MΩ gate biasing resistor; thus at f = 0 we see only the resistor, whilst at f → ∞ the
reactance of the input capacitance is zero; b) the negative input impedance components affect the input
impedance near the origin; c) with an inductive signal source, the point at which the impedance crosses
the negative real axis corresponds to the system resonant frequency, provoking oscillations; d) the
compensation removes the negative components.
-3.93-
P. Stari, E. Margan
In Fig. 3.9.12a the JFET gate is tied to ground by a 1 MH resistor, which, with a
purely capacitive input impedance, would give a phasor diagram in the form of a half
circle with frequency varying from DC to infinity.
In Fig. 3.9.12b we concentrate on the small area near the complex plane origin
(high frequencies, close to 0FET ), where we draw the influence of the negative input
impedance components, assuming a resistive signal source.
In Fig. 3.9.12c an inductive signal source (with a small resistive component)
will cause the impedance crossing the real axis in the negative region, therefore the
circuit would oscillate at the frequency at which this crossing occurs.
Finally, in Fig. 3.9.12d we see the same situation but with the negative
components compensated as in Fig. 3.9.11. Note the small loop in the first quadrant of
the impedance plot; it is caused by the small resistance R_G of the coil L_G, the coil
inductance, and the total input capacitance C_in.
In Fig. 3.9.13 we see yet another way of compensating the negative input
impedance. Here the compensation is achieved by inserting a small resistance R_d in the
drain, thus allowing the anti-phase signal at the drain to influence the gate via C_gd and
cancel the in-phase signal from the JFET source via C_gs. This method is sometimes
preferred over the former method, because the PCB pads, which are needed to
accommodate the additional compensation components, also create some additional
parasitic capacitance from the gate to ground.
[Fig. 3.9.13: Compensation of the negative input impedance by a small drain resistor R_d; the anti-phase drain signal is coupled to the gate via C_gd and cancels the in-phase feedback via C_gs.]
Résumé of Part 3
In this part we have analyzed some basic circuits for wideband amplification,
examined their most important limitations, and explained several ways of improving
their high frequency performance. The property which can cause most trouble, even for
experienced designers, is the negative input impedance of some of the most useful
wideband circuits, and we have shown a few possible solutions.
The reader must realize, however, that the analytical tools and solutions
presented are by no means the ultimate design examples. For a final design, many other
aspects of circuit performance must also be carefully considered, and, more often than
not, these other factors will compromise the wideband performance severely.
As we have indicated at some points, there are ways of compensating certain
unwanted circuit behavior by implementing the system in a differential configuration,
but, on the negative side, this doubles the number of active components, increasing
cost, power dissipation, circuit size, strays and parasitics and also the production and
testing complexity. From the wideband design point of view, having many active
components usually means many more poles and zeros that must be carefully analyzed
and appropriately tuned.
In Part 4 and Part 5 we shall explain some theoretical and practical techniques
for an efficient design approach at the system level.
Wideband Amplifiers
Part 4:
List of Figures:
Fig. 4.1.1: A multi-stage amplifier with identical, DC coupled, RC loaded stages ............................... 4.9
Fig. 4.1.2: Frequency response of an n-stage amplifier, n = 1…10 ........................................................ 4.10
Fig. 4.1.3: A slope of 6 dB/octave equals 20 dB/decade ................................................................ 4.11
Fig. 4.1.4: Phase angle of the amplifier in Fig. 4.1.1, n = 1…10 ........................................................... 4.12
Fig. 4.1.5: Envelope delay of the amplifier in Fig. 4.1.1, n = 1…10 ..................................................... 4.13
Fig. 4.1.6: Amplifier with n identical DC coupled stages, excited by the unit step .............................. 4.13
Fig. 4.1.7: Step response of the amplifier in Fig. 4.1.6, n = 1…10 ....................................................... 4.15
Fig. 4.1.8: Slew rate limiting: definition of parameters ........................................................................ 4.17
Fig. 4.1.9: Minimal relative rise time as a function of total gain and number of stages ....................... 4.18
Fig. 4.1.10: Optimal number of stages required for minimal rise time at given gain ............................ 4.20
Fig. 4.2.1: An AC coupled multi-stage amplifier .................................................................................. 4.21
Fig. 4.2.2: Frequency response magnitude of the AC coupled amplifier, n = 1…10 ............................ 4.22
Fig. 4.2.3: Phase angle of the AC coupled n-stage amplifier, n = 1…10 .............................................. 4.23
Fig. 4.2.4: Step response of the multi-stage AC coupled amplifier ...................................................... 4.25
Fig. 4.2.5: Pulse response of the AC coupled multi-stage amplifier ..................................................... 4.25
Fig. 4.3.1: Impulse response of three different complex conjugate pole pairs ...................................... 4.28
Fig. 4.3.2: Butterworth poles for the system order n = 1…5 ................................................................. 4.30
Fig. 4.3.3: Frequency response magnitude of Butterworth systems, n = 1…10 .................................... 4.31
Fig. 4.3.4: Phase response of Butterworth systems, n = 1…10 ............................................................. 4.32
Fig. 4.3.5: Envelope delay of Butterworth systems, n = 1…10 ............................................................. 4.33
Fig. 4.3.6: Step response of Butterworth systems, n = 1…10 ............................................................... 4.34
Fig. 4.3.7: Ideal MFA frequency response ........................................................................................... 4.36
Fig. 4.3.8: Step response of a network having an ideal MFA frequency response ............................... 4.36
Fig. 4.6.1: Comparison of frequency responses of systems with staggered vs. repeated pole pairs ...... 4.63
Fig. 4.6.2: Step response comparison of systems with staggered vs. repeated pole pairs ..................... 4.64
Fig. 4.6.3: Individual stage step response of a 3-stage, 5-pole system ................................................. 4.66
Fig. 4.6.4: Step response of the complete 3-stage, 5-pole system, reverse pole order .......................... 4.66
Fig. 4.6.5: Step response of the complete 3-stage, 5-pole system, correct pole order .......................... 4.67
List of Tables:
Table 4.1.1: Values of the upper cut off frequency of a multi-stage amplifier for n = 1…10 ................ 4.11
Table 4.1.2: Values of relative rise time of a multi-stage amplifier for n = 1…10 ................................ 4.15
Table 4.2.1: Values of the lower cutoff frequency of an AC coupled amplifier for n = 1…10 ............. 4.23
Table 4.3.1: Butterworth poles of order n = 1…10 ............................................................................... 4.35
Table 4.4.1: Relative bandwidth improvement of systems with Bessel poles ....................................... 4.43
Table 4.4.2: Relative rise time improvement of systems with Bessel poles .......................................... 4.46
Table 4.4.3: Bessel poles (equal envelope delay) of order n = 1…10 ................................................... 4.48
Table 4.4.4: Bessel poles (equal cut off frequency) of order n = 1…10 ................................................ 4.54
Table 4.5.1: Modified Bessel poles (equal asymptote as Butterworth) ................................................. 4.58
4.0 Introduction
In a majority of cases the desired gain-bandwidth product is not achievable
with a single transistor amplifier stage, so more stages must be connected in
cascade. But to do this correctly we must answer several questions:
Should all stages be equal or different?
What is the optimal pole pattern for obtaining a desired response?
Is it important which pole (pair) is assigned to which stage?
Is it better to use many simple (first- and second-order) stages or is it worth
the trouble to try more complex (third-, fourth- or higher order) stages?
What is the optimum single stage gain to achieve the greatest possible
gain bandwidth product for a given number of stages?
Is it possible to construct an ideal multi-stage amplifier with either
maximally flat amplitude (MFA) or maximally flat envelope delay (MFED)
response and how close to the ideal response can we come?
These are the main questions which we shall try to answer in this part.
In Sec. 4.1 we discuss a cascade of identical DC coupled amplifier stages, with
loads consisting of a parallel connection of a resistance and a (stray) capacitance. There
we derive the formula for the calculation of an optimum number of amplifying stages to
obtain the required gain with the smallest rise time possible for the complete amplifier.
Next we derive the expression for the optimum gain of an individual amplifying
stage of a multi-stage amplifier in order to achieve the smallest possible rise time. We
also discuss the effect of AC coupling between particular stages by means of a simple
RC network.
Butterworth poles, which are needed to achieve an MFA response, are derived
next. This leads to the discussion of the (im)possibility to design an ideal MFA
amplifier.
Then we derive the Bessel poles which provide the MFED response. Since they
are derived from the condition for a unit envelope delay, the upper cut off frequency
increases with the number of poles. Therefore we also present the derivation of two
different pole normalizations: to equal cut off frequency and to equal stop band
asymptote. We discuss the (im)possibility of designing an amplifier with the frequency
response approaching an ideal Gaussian curve. Further we discuss the interpolation
between the Bessel and the Butterworth poles.
Finally, we explain the merit of using staggered Bessel poles versus repeated
second-order Bessel pole pairs.
Wherever practical, we calculate and plot the frequency, phase, group delay and
step response to allow a quick comparison of different concepts.
[Fig. 4.1.1: A multi-stage amplifier with identical, DC coupled, RC loaded stages Q_1 … Q_n (loads R_1C_1 … R_nC_n), driven by the generator V_g.]
The transfer function of a single stage is:

   A_k = −g_m R / (1 + jωRC)    (4.1.1)

with the magnitude:

   |A_k| = g_m R / √(1 + (ω/ω_h)^2),  where ω_h = 1/RC    (4.1.2)

The total gain of the n-stage amplifier is:

   A = A_1 A_2 A_3 ⋯ A_n = A_k^n = [−g_m R / (1 + jω/ω_h)]^n    (4.1.3)

with the magnitude:

   |A| = (g_m R)^n / [1 + (ω/ω_h)^2]^{n/2}    (4.1.4)
Fig. 4.1.2: Frequency response of an n-stage amplifier (n = 1, 2, …, 10). To compare the
bandwidth, the gain was normalized, i.e., divided by the system DC gain, (g_m R)^n. For each n,
the bandwidth (the crossing of the 0.707 level) shrinks by √(2^{1/n} − 1).
The upper half power frequency of the amplifier can be calculated by a simple relation:

   [1 / √(1 + (ω_H/ω_h)^2)]^n = 1/√2    (4.1.5)

By squaring we obtain:

   [1 + (ω_H/ω_h)^2]^n = 2  ⟹  (ω_H/ω_h)^2 = 2^{1/n} − 1    (4.1.6)

The upper half power frequency of the complete n-stage amplifier is:

   ω_H = ω_h √(2^{1/n} − 1)    (4.1.7)
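Eq. 4.1.7 is easy to check numerically; the minimal sketch below (plain Python, function name ours) evaluates the bandwidth shrinkage factor for comparison with Table 4.1.1:

```python
import math

def bandwidth_shrinkage(n: int) -> float:
    """Return omega_H / omega_h for n identical cascaded RC stages (Eq. 4.1.7)."""
    return math.sqrt(2.0 ** (1.0 / n) - 1.0)

# Compare with Table 4.1.1 (1.000, 0.644, 0.510, ...):
for n in range(1, 11):
    print(n, round(bandwidth_shrinkage(n), 3))
```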
At high frequencies, the first stage response slope approaches the −6 dB/octave
asymptote (−20 dB/decade). The meaning of this slope is explained in Fig. 4.1.3. For
the second stage the slope is twice as steep, and for the nth stage it is n times steeper.
Fig. 4.1.3: The first-order system response and its asymptotes. Below the cut off, the
asymptote is the level equal to the system gain at DC (normalized here to 0 dB). Above the
cut off, the slope is −6 dB/octave (an octave is a frequency span from f to 2f), which is
also equal to −20 dB/decade (a frequency decade is a span from f to 10f).
"
&
'
"!
=H ".!!! !.'%% !.&"! !.%$& !.$)' !.$&! !.$#$ !.$!" !.#)$ !.#'*
With ten equal stages connected in cascade the bandwidth is reduced to a poor
0.269 ω_h; such an amplifier is definitely not very efficient for wideband amplification.
Alternatively, in order to preserve the bandwidth, an n-stage amplifier should
have all its capacitors reduced by the same factor, √(2^{1/n} − 1). But in wideband
amplifiers we already strive to work with stray capacitances only, so this approach is
not a solution.
Nevertheless, the amplifier in Fig. 4.1.1 is the basis for more efficient amplifier
configurations, which we shall discuss later.
The phase angle between the output and the input signal is:

   φ = arctan( ℑ{F(jω)} / ℜ{F(jω)} ) = −arctan(ω/ω_h)    (4.1.8)

where F(jω) is taken from Eq. 4.1.1. For n equal stages the total phase angle is simply
n times as much:

   φ_n = −n arctan(ω/ω_h)    (4.1.9)

The phase responses are plotted in Fig. 4.1.4. Note the high frequency
asymptotic phase shift increasing by π/2 (or 90°) for each n. Also note the shift at
ω = ω_h being exactly −nπ/4, in spite of a reduced ω_H for each n.
Fig. 4.1.4: Phase angle of the amplifier in Fig. 4.1.1, for n = 1…10 amplifying stages.
The envelope delay of a single stage is:

   τ_e ω_h = 1 / [1 + (ω/ω_h)^2]    (4.1.10)

and for n equal stages it is n times as long:

   τ_en ω_h = n / [1 + (ω/ω_h)^2]    (4.1.11)
Fig. 4.1.5 shows the frequency dependent envelope delay for n = 1…10. Note the
delay at ω = ω_h being exactly 1/2 of the low frequency asymptotic value.
Fig. 4.1.5: Envelope delay of the amplifier in Fig. 4.1.1, for n = 1…10 amplifying stages. The
delay at ω = ω_h is 1/2 of the low frequency asymptotic value. Note that if we were using f/f_h
for the abscissa, we would have to divide the τ_e scale by 2π.
Fig. 4.1.6: Amplifier with n equal DC coupled stages, excited by the unit step.
We can derive the step response expression from Eq. 4.1.1 and Eq. 4.1.3. In
order to simplify and generalize the expression we shall normalize the magnitude by
dividing the transfer function by the DC gain, g_m R, and normalize the frequency by
setting ω_h = 1/RC = 1. Since we shall use the ℒ⁻¹ transform, we shall replace the
variable jω by the complex variable s = σ + jω.
The normalized transfer function of the n-stage amplifier is then:

   F(s) = 1 / (s + 1)^n    (4.1.12)

The amplifier input is excited by the unit step, therefore we must multiply the
above formula by the unit step operator 1/s:

   G(s) = 1 / [s (s + 1)^n]    (4.1.13)

The step response is the sum of the residues of G(s) e^{st}:

   g_n(t) = ℒ⁻¹{G(s)} = Σ res [ e^{st} / (s (s + 1)^n) ]    (4.1.14)

The simple pole at s = 0 contributes:

   res_0 = lim_{s→0} s · e^{st} / [s (s + 1)^n] = 1    (4.1.15)

whilst the n-fold pole at s = −1 contributes:

   res_1 = lim_{s→−1} 1/(n − 1)! · d^{n−1}/ds^{n−1} [ (s + 1)^n · e^{st} / (s (s + 1)^n) ]
         = lim_{s→−1} 1/(n − 1)! · d^{n−1}/ds^{n−1} [ e^{st} / s ]    (4.1.16)

For the first few n this gives:

   n = 1:  g_1(t) = 1 − e^{−t}    (4.1.17)
   n = 2:  g_2(t) = 1 − e^{−t} (1 + t)
   n = 3:  g_3(t) = 1 − e^{−t} (1 + t + t^2/2)    (4.1.18)

etc. The general expression for the step response for any n is:

   g_n(t) = ℒ⁻¹{G(s)} = res_0 + res_1 = 1 − e^{−t} Σ_{k=1}^{n} t^{k−1}/(k − 1)!    (4.1.19)

where the sum expands as:

   Σ_{k=1}^{n} t^{k−1}/(k − 1)! = 1 + t/1! + t^2/2! + t^3/3! + ⋯ + t^{n−1}/(n − 1)!    (4.1.20)
The step response plots for n = 1…10, calculated by Eq. 4.1.19, are shown in
Fig. 4.1.7. Note that there is no overshoot in any of the curves. Unfortunately, the
efficiency of this kind of amplifier, in the sense of bandwidth per number of stages,
is poor, since it has no peaking networks which would prevent the decrease of
bandwidth with n.
Fig. 4.1.7: Step response of the amplifier in Fig. 4.1.6, for n = 1…10 amplifying stages.
In Part 2, Sec. 2.1.1, Eq. 2.1.14, we have calculated the rise time of an
amplifier with a simple RC load to be:

   τ_r1 = 2.20 RC    (4.1.21)

Since here we have n equal stages, the rise time of the complete amplifier is:

   τ_r = τ_r1 √n = 2.20 RC √n    (4.1.22)
Table 4.1.2 shows the rise time increasing with the number of stages:
Table 4.1.2
n            1     2     3     4     5     6     7     8     9     10
τ_rn/τ_r1  1.00  1.41  1.73  2.00  2.24  2.45  2.65  2.83  3.00  3.16
The large signal response speed is limited by the finite current available to charge the
load capacitance, since the output voltage slope is:

   dv_o/dt = I_omax / C    (4.1.23)

Here I_omax is the maximum output current available to drive the loading capacitance C.
If v_o is a sinusoidal signal of angular frequency ω_FP and amplitude V_max, such
that v_o = V_max sin ω_FP t, the slope varies with time as:

   d(V_max sin ω_FP t)/dt = V_max ω_FP cos ω_FP t    (4.1.24)

and it has a maximum at |cos ω_FP t| = 1 (which is at t = 0, π/ω_FP, …; see Fig. 4.1.8a).
Therefore the slew rate is:

   SR = V_max ω_FP    (4.1.25)

The slew rate is usually expressed in volts per microsecond [V/μs]; for contemporary
amplifiers a more appropriate figure would be volts per nanosecond [V/ns].
If we increase the signal frequency beyond ω_FP the waveform will eventually be
distorted into a linear ramp shape, but reduced in amplitude, because the slope of a
sinusoidal signal is reduced to zero at the peak voltage V_max. However, if driven by a
step signal of the same amplitude, the linear ramp will span the full amplitude range. We
need to find the total slewing time between −V_max and +V_max; by equating the small
signal derivative dv_o/dt with the large signal slewing ΔV/Δt we find:

   SR = dv_o/dt |max = ΔV/Δt = 2 V_max / t_slew    (4.1.26)

from which:

   t_slew = 2 V_max / (V_max ω_FP) = 2/ω_FP = T_FP/π    (4.1.27)

where T_FP = 2π/ω_FP, i.e., the period of the full power bandwidth sinewave signal.
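The relations above can be checked with some illustrative numbers (a minimal sketch in plain Python; the SR and V_max values are our assumptions, not from the text):

```python
import math

# Illustrative assumptions: a stage with SR = 100 V/us driving a +/-1 V swing
SR = 100e6          # slew rate in V/s  (= 100 V/us)
V_max = 1.0         # amplitude in V

omega_FP = SR / V_max            # Eq. 4.1.25: full power bandwidth, rad/s
T_FP = 2 * math.pi / omega_FP    # period of the full power sine wave
t_slew = 2 * V_max / SR          # Eq. 4.1.26: full swing slewing time

# Eq. 4.1.27: the slewing time equals T_FP / pi
assert abs(t_slew - T_FP / math.pi) < 1e-18
print(f"f_FP = {omega_FP / (2 * math.pi) / 1e6:.2f} MHz, t_slew = {t_slew * 1e9:.0f} ns")
```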
Fig. 4.1.8: Definitions of slew rate limiting parameters: a) the slew rate is equal to the highest
dv/dt of the full-power undistorted sine wave; b) the slewing time defined from the large signal
step response; the equivalent slewing frequency (of the triangle waveform) is higher than ω_FP of
the sine wave.
Since the ramp spans the 10% to 90% points in 0.8 of the total slewing time, the equivalent
rise time of the slew rate limited response is:

   τ_r = 0.8 t_slew = 0.8 T_FP/π ≈ 0.2546 T_FP    (4.1.28)
In a multi-stage amplifier the total gain is the product of the individual stage gains:

   v_on/v_i1 = A_1 A_2 ⋯ A_n    (4.1.29)

If all the amplifying stages are identical, we denote the individual stage gain as
A_k, the loading resistors R, and the loading capacitances C. Then the total gain is:

   A = A_k^n    (4.1.30)
The rise time of the complete multi-stage amplifier is equal to the square root of
the sum of the individual rise times squared, but since the amplitude after each gain
stage is different, we must normalize the rise times by multiplying each with its own
gain factor:

   τ_r = √( Σ_{i=1}^{n} (A_i τ_ri)^2 )    (4.1.31)

We have assumed that all stages have identical gain, A_i = A_k, and rise time, τ_ri = τ_rk.
Therefore:

   τ_r = √( n (A_k τ_rk)^2 ) = √n A_k τ_rk = √n A_k · 2.20 RC    (4.1.32)

where τ_rk is the rise time of an individual stage, as calculated in Part 2. By considering
Eq. 4.1.30, we obtain the following relation:

   τ_r/τ_rk = √n A^{1/n}    (4.1.33)

We have plotted this relation in Fig. 4.1.9, using the system total gain A as the
parameter. Note that, in order to see the function τ_r(n) better, we have assumed a
continuous n; of course, we can not have, say, 4.63 stages: n must be an integer.
Fig. 4.1.9: Minimal relative rise time as a function of the number of stages n
and the total system gain A. Close to the minima the curves are relatively flat,
so in practice we can trade off, say, a 10% increase in the system rise time and
reduce the required number of stages accordingly; i.e., to achieve a gain of
100, only 5 stages could be used instead of 9, with only a slight rise time increase.
From this diagram we can find the optimum number of the amplifying stages
8opt if the total system gain E is known. These optima lie on the valleys of the curves
and in the following discussion we will derive the necessary formulae.
To find the optimum number of stages we differentiate Eq. 4.1.33 with respect to n and
set the derivative to zero:

   ∂(τ_r/τ_rk)/∂n = ∂(√n A^{1/n})/∂n = 0    (4.1.34)

   A^{1/n} [ 1/(2√n) − (√n ln A)/n^2 ] = 0    (4.1.35)

Because neither τ_rk nor A is zero, we equate the expression in brackets to zero:

   1/(2√n) − (√n ln A)/n^2 = 0    (4.1.36)

Multiplying by 2n√n we obtain:

   n − 2 ln A = 0  ⟹  n = 2 ln A    (4.1.37)

Therefore the optimum number of amplifying stages for a given total gain A is:

   n_opt = int(2 ln A)    (4.1.38)

and since we can not have, say, 3.47 amplifying stages, we round the result to the
nearest integer, the smallest obviously being 1.
On the basis of this simple relation we can draw the line a in Fig. 4.1.10 for a
quick estimation of the number of amplifying stages necessary to obtain the smallest
rise time if the total system gain A is known. Again, the required number of amplifying
stages can be reduced in practice, as indicated in Fig. 4.1.10 by the line b, without
significantly increasing the rise time. For reasons of economy, the simplest
systems are often designed far from the optimum, as indicated by the bars and the line c.
Eq. 4.1.33 and 4.1.38 and the corresponding diagrams are also valid for peaking
stages, although peaking stages can be designed much more efficiently, as we shall see.
From Eq. 4.1.38 we can find the optimal gain value of the individual stage,
independent of the actual number of stages in the system. For n equal stages it is:

   A_k = A^{1/n} = A^{1/(2 ln A)}    (4.1.39)

Taking the logarithm:

   ln A_k = (1/(2 ln A)) ln A = 1/2    (4.1.40)

The optimal individual stage gain for the total minimal rise time is then:

   A_k opt = e^{1/2} ≈ 1.65    (4.1.41)
This expression gives us the optimal value of the gain of the individual
amplifying stage which minimizes the total rise time of a multi-stage amplifier. In
practice we usually take a higher value, say, between 2 and 4, in order to decrease the
cost and simplify the design. Eq. 4.1.41 can also be used for peaking stages.
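Eq. 4.1.38 and Eq. 4.1.41, and the flatness of the minima discussed above, can be checked numerically; a minimal sketch (plain Python, function names ours):

```python
import math

def n_opt(A: float) -> int:
    """Optimum number of stages for total gain A (Eq. 4.1.38)."""
    return max(1, round(2 * math.log(A)))

def relative_risetime(n: int, A: float) -> float:
    """tau_r / tau_rk = sqrt(n) * A**(1/n)   (Eq. 4.1.33)"""
    return math.sqrt(n) * A ** (1.0 / n)

# The optimal per-stage gain is sqrt(e) ~ 1.65 (Eq. 4.1.41):
A_k_opt = math.exp(0.5)

# For A = 100 the optimum is 9 stages; using only 5 stages costs roughly 12%
# more rise time, which is why per-stage gains of 2 to 4 are practical:
penalty = relative_risetime(5, 100) / relative_risetime(n_opt(100), 100)
```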
Fig. 4.1.10: The optimal number of stages, n, required to achieve the minimal
rise time for a given total system gain A, as calculated by Eq. 4.1.38, is shown
by line a. In practice, owing to economy reasons, we tend to use a lower
number; the line b shows the same +10% rise time trade off as in Fig. 4.1.9. In
low complexity systems we usually make even greater tradeoffs, as in c.
[Fig. 4.2.1: An AC coupled multi-stage amplifier; each stage Q_1 … Q_n is loaded by R_L and coupled to the next gate through C_1 … C_n, with gate resistors R_1 … R_n to ground.]
Since we want to focus on essential problems only, here, too, we use FETs,
instead of BJTs, in order to avoid the complicated expression for the base input
impedance of each stage. Moreover, in a wideband amplifier we can assume R_L ≪ R_n,
so we shall neglect the effect of R_n on gain. On the other hand, R_n and C_n set the low
frequency limit of each stage, which is ω_l = 1/(R_n C_n) = 1/RC, if all stages are
identical. Usually, ω_l is many orders of magnitude below ω_h, so we can neglect the
stray and input capacitances (both effectively in parallel with the loading resistors R_L) as
well. Thus, near ω_l, the voltage gain of each stage is:

   A_k = g_m R_L (jω/ω_l) / (1 + jω/ω_l)    (4.2.1)

and the magnitude is:

   |A_k| = g_m R_L (ω/ω_l) / √(1 + (ω/ω_l)^2)    (4.2.2)

With all input time constants equal to RC, the total system gain is A_k raised to the nth power:

   A = (g_m R_L)^n [ (jω/ω_l) / (1 + jω/ω_l) ]^n    (4.2.3)

   |A| = (g_m R_L)^n [ (ω/ω_l) / √(1 + (ω/ω_l)^2) ]^n    (4.2.4)
Fig. 4.2.2: Frequency response magnitude of the AC coupled amplifier for n = 1…10. The
frequency scale is normalized to the lower cut off frequency ω_l of the single stage.
It is evident that the lower half power frequency ω_L of the complete amplifier
increases with the number of stages. We can express ω_L as a function of n from:

   [ (ω_L/ω_l) / √(1 + (ω_L/ω_l)^2) ]^n = 1/√2    (4.2.5)

By squaring we obtain:

   [ (ω_L/ω_l)^2 / (1 + (ω_L/ω_l)^2) ]^n = 1/2    (4.2.6)

Taking the nth root:

   (ω_L/ω_l)^2 / (1 + (ω_L/ω_l)^2) = 2^{−1/n}    (4.2.7)

Solving for (ω_L/ω_l)^2:

   (ω_L/ω_l)^2 = 1 / (2^{1/n} − 1)    (4.2.8)

we obtain the lower half power frequency of the complete multi-stage amplifier:

   ω_L = ω_l / √(2^{1/n} − 1)    (4.2.9)
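Eq. 4.2.9 is easy to check numerically; a minimal sketch (plain Python, function name ours) showing how the lower cutoff climbs with n:

```python
import math

def lower_cutoff_ratio(n: int) -> float:
    """omega_L / omega_l of n identical AC coupled stages (Eq. 4.2.9)."""
    return 1.0 / math.sqrt(2.0 ** (1.0 / n) - 1.0)

# One stage leaves the cutoff unchanged; each added stage pushes omega_L up:
ratios = [lower_cutoff_ratio(n) for n in range(1, 11)]
assert ratios[0] == 1.0
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
```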
The phase angle of a single stage is:

   φ = arctan(ω_l/ω)    (4.2.10)

The phase shift is positive and this means a phase advance. For n stages the total
phase advance is simply n times as much:

   φ_n = n arctan(ω_l/ω)    (4.2.11)

The corresponding plots for n = 1…10 are shown in Fig. 4.2.3. Note the phase
shift at ω = ω_l being exactly 1/2 of the low frequency asymptotic value.
Fig. 4.2.3: Phase angle as a function of frequency for the AC coupled n-stage amplifier,
n = 1…10. The frequency scale is normalized to the lower cutoff frequency ω_l of the single stage.
We will omit the calculation of envelope delay since in the low frequency region
this aspect of amplifier performance is not very important.
"=
The systems frequency response must be multiplied by the unit step operator "=:
K8 =
8
"
=
=
"=
(4.2.13)
Now we apply the ℒ⁻¹ transformation and obtain the time domain step response:

   g_n(t) = ℒ⁻¹{G_n(s)} = res [ G_n(s) e^{st} ] = res [ s^{n−1} e^{st} / (s + 1)^n ]    (4.2.14)

Since we have here a single pole repeated n times we have only a single residue,
but as we will see it is composed of n summands. A general expression for the
residue for an arbitrary n is:

   g_n(t) = lim_{s→−1} 1/(n − 1)! · d^{n−1}/ds^{n−1} [ (s + 1)^n · s^{n−1} e^{st} / (s + 1)^n ]    (4.2.15)

or, simplified:

   g_n(t) = 1/(n − 1)! · d^{n−1}/ds^{n−1} [ s^{n−1} e^{st} ] |_{s→−1}    (4.2.16)
A few examples:

   n = 1:  g_1(t) = (1/0!) e^{st} |_{s→−1} = e^{−t}
   n = 2:  g_2(t) = (1/1!) d/ds [ s e^{st} ] |_{s→−1} = e^{−t} − t e^{−t} = e^{−t}(1 − t)
   n = 3:  g_3(t) = e^{−t}(1 − 2t + 0.5 t^2)    (4.2.17)

The coefficients decrease rapidly with an increasing number of stages n; e.g., the
last summand for n = 10 is 2.755·10⁻⁶ t⁹.
The corresponding plots are drawn in Fig. 4.2.4. The plots for n = 6…9 are not
shown, since the individual curves would be impossible to distinguish. We note that the
nth-order response intersects the abscissa n − 1 times.
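The closed forms in Eq. 4.2.17 can be cross-checked by directly simulating a cascade of CR high-pass sections; a minimal sketch (plain Python, forward-Euler integration, function name ours):

```python
import math

def simulate_ac_cascade(n: int, t_end: float, dt: float = 1e-4) -> float:
    """Step response of n cascaded CR high-pass sections (RC = 1),
    integrated with simple forward Euler; returns the output at t = t_end."""
    vc = [0.0] * n               # capacitor voltages
    y = 0.0
    for _ in range(int(t_end / dt)):
        u = 1.0                  # unit step input to the first stage
        for i in range(n):
            y = u - vc[i]        # high-pass output of stage i
            vc[i] += y * dt      # dvc/dt = y / RC, with RC = 1
            u = y                # feeds the next stage
    return y

# Closed form for n = 3 (Eq. 4.2.17): g3(t) = exp(-t) * (1 - 2t + 0.5 t^2)
t = 1.0
g3 = math.exp(-t) * (1 - 2 * t + 0.5 * t ** 2)
assert abs(simulate_ac_cascade(3, t) - g3) < 1e-3
```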
Fig. 4.2.4: Step response of the multi-stage AC coupled amplifier for n = 1…5 and n = 10.
For pulse amplification only the short starting portions of the curves come into
consideration. An example for n = 1, 3, and 8 is shown in Fig. 4.2.5 for a pulse width
Δt = 0.1 RC.
Fig. 4.2.5: Pulse response of the AC coupled multi-stage amplifier (n = 1, 3, and 8).
Note that the pulse in Fig. 4.2.5 sags, both on the leading and trailing edge, the
sag increasing with the number of stages. We conclude that the AC coupled amplifier of
Fig. 4.2.1 is not suitable for faithful pulse amplification, except when the pulse
duration is very short in comparison with the time constant RC of a single amplifying
stage (say, Δt ≤ 0.001 RC).
Another undesirable property of the AC coupled amplifier is that the output
voltage makes n − 1 damped oscillations when the pulse ends, no matter how short its
duration is. This is especially annoying because the input voltage is by then already
zero. The undesirable result is that the effective output DC level will depend on the
pulse repetition rate.
Since today the DC amplification technique has reached a very high quality
level, we can consider the AC coupled amplifier an inheritance from the era of
electronic tubes and thus almost obsolete. However, we still use AC coupled amplifiers
to avoid the drift in those cases where the deficiencies described are not important.
Consider a transfer function of the form:

   |F(jω)|^2 = 1 / [1 + (ω/ω_H)^{2n}]    (4.3.1)

where ω_H is the upper half power frequency of the (peaking) amplifier and n is an
integer, representing the number of stages. A network corresponding to this equation
has a maximally flat amplitude response (MFA). The magnitude of F(jω) is:

   |F(jω)| = 1 / √(1 + (ω/ω_H)^{2n})    (4.3.2)

Its first derivative:

   d|F(jω)|/dω = −(n/ω_H) (ω/ω_H)^{2n−1} / [1 + (ω/ω_H)^{2n}]^{3/2}    (4.3.3)

is zero at ω = 0;
and not just the first derivative, but all the n − 1 derivatives of an nth-order system are
also zero at the origin. This means that the filter is essentially flat at very low frequencies
(ω ≪ ω_H). The number of poles in Eq. 4.3.1 is equal to the parameter n and the flatness
of the frequency response in the passband also increases with n. The parameter n is
called the system order. To derive the expression for the poles we start with the
denominator of Eq. 4.3.2, where the expression under the root can be simplified into:
" a==H b# 8 " B# 8
(4.3.4)
or
- 4.27 -
B# 8 "
(4.3.5)
The roots of these equations are the poles of Eq. 4.3.1 and they can be calculated
by the following general expression:

   x = (−1)^{1/(2n)}    (4.3.6)

Writing −1 = e^{jπ(1+2q)}, with q an integer:

   x_q = e^{jπ(1+2q)/(2n)}    (4.3.7)

   x_q = cos[π(1+2q)/(2n)] + j sin[π(1+2q)/(2n)]    (4.3.8)
If we insert the values 0, 1, 2, …, 2n − 1 for q, we obtain 2n roots. The roots
lie on a circle of radius r = 1, spaced by the angles π/n. With this condition no pole is
repeated. One half (n) of the poles lie in the left side of the s-plane; these are the
poles of Eq. 4.3.1. None of the poles lies on the imaginary axis. The other half of the
poles lie in the positive half of the s-plane and they can be associated with the complex
conjugate of F(jω); as shown in Fig. 4.3.1, owing to the Hurwitz stability requirement,
they are not useful for our purpose.
Fig. 4.3.1: Impulse response of three different complex conjugate pole pairs. The real part
determines the system stability: s_1a and s_2a make the system unconditionally stable, since the
negative exponent forces the response to decrease with time; s_1b and s_2b make the system
conditionally stable, whilst s_1c and s_2c make it unstable.
This left and right half pole division is not arbitrary but, as we have explained
in Part 1, it reflects the direction of energy flow. If an unconditionally stable system is
energized and then left alone, it will eventually dissipate all the energy into heat and
RF radiation, so it is lost (from the system's point of view) and therefore we agree to give
it a negative sign. This is typical of dominantly resistive systems. On the other hand,
generators produce energy and we agree to give them a positive sign. In effect,
generators can be treated as negative resistors. Inductances and capacitances can not
dissipate energy; they can only store it in their associated electromagnetic fields (for a
while). We therefore assign the resistive and regenerative action to the real axis and the
inductive and capacitive action to the imaginary axis.
For example, if we take a two-pole system with poles forming a complex
conjugate pair, s_1 = σ + jω and s_2 = σ − jω, the system impulse response function
has the form:

   f(t) = e^{σt} sin ωt    (4.3.9)

By referring to Fig. 4.3.1, let us first consider the poles s_1a = −0.8 + j and
s_2a = −0.8 − j, where ω = 1. Their impulse response is a damped sinusoid:

   f(t) = e^{−0.8t} sin ωt    (4.3.10)

This means that for any impulse disturbance the system reacts with a sinusoidal
oscillation (governed by ω), exponentially damped (at the rate set by σ). Such behavior
is typical of an unconditionally stable system. If we move the poles to the imaginary
axis (σ = 0), so that s_1b = j and s_2b = −j (again, ω = 1), then an impulse excites the
system into a continuous sine wave:

   f(t) = sin ωt    (4.3.11)

If we push the poles further to the right side of the s-plane, so that s_1c = 0.8 + j
and s_2c = 0.8 − j, keeping ω = 1, the slightest impulse disturbance, or even just the
system's own noise, excites an exponentially rising sine wave:

   f(t) = e^{0.8t} sin ωt    (4.3.12)
The poles on the imaginary axis are characteristic of a sine wave oscillator, in
which we have the active components (amplifiers) set to make up for (and exactly
match) any energy lost in resistive components. The poles on the right side of the
s-plane also result in oscillations, but there the final amplitude is limited by the system
power supply voltages. Because the active components provide much more energy than
the system is capable of dissipating thermally, the top and bottom part of the waveform
will be saturated, thus limiting the energy produced. Since we are interested in the
design of amplifiers and not of oscillators, we shall not use the last two kinds of poles.
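The three cases can be illustrated with a trivial numerical sketch (plain Python, function name ours), checking the amplitude envelope e^{σt} of Eq. 4.3.10 to 4.3.12:

```python
import math

def envelope(sigma: float, t: float) -> float:
    """Amplitude envelope of the impulse response f(t) = e^(sigma*t) * sin(omega*t)."""
    return math.exp(sigma * t)

# sigma = -0.8 (poles s_1a, s_2a): the oscillation dies out -> unconditionally stable
assert envelope(-0.8, 10.0) < 1e-3
# sigma = 0 (poles s_1b, s_2b): constant amplitude -> sine wave oscillator
assert envelope(0.0, 10.0) == 1.0
# sigma = +0.8 (poles s_1c, s_2c): the amplitude grows until the supply rails clip it
assert envelope(0.8, 10.0) > 1e3
```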
Let us return to the Butterworth poles. We want to find the general expression
for the n poles on the left side of the s-plane. A general expression for a pole s_q, derived
from Eq. 4.3.8, is:

   s_q = cos θ_q + j sin θ_q    (4.3.13)

where:

   θ_q = π (1 + 2q)/(2n)    (4.3.14)

Only the poles with:

   ℜ{s_q} < 0    (4.3.15)

lie in the left side of the s-plane. If we multiply Eq. 4.3.8 by j, we rotate it by π/2 and
thus for the first n poles we achieve the condition expressed in Eq. 4.3.15:

   s_q = −sin[π(1 + 2q)/(2n)] + j cos[π(1 + 2q)/(2n)]    (4.3.16)

By substituting q = k − 1 we obtain:

   s_k = −sin[π(2k − 1)/(2n)] + j cos[π(2k − 1)/(2n)]    (4.3.17)

where k is an integer from 1 to n. As shown in Fig. 4.3.2 (for n = 1…5), all these poles
lie on a semicircle with the radius r = 1 in the left half of the s-plane:
[Fig. 4.3.2: Butterworth poles for the system order n = 1…5; for each n the poles s_1 … s_n lie on the unit circle in the left half of the s-plane.]
The numerical values of the poles for systems of order n = 1…10, together with
the corresponding angle θ, are listed in Table 4.3.1. Obviously, if n is even the system
has complex conjugate pole pairs only. If n is odd, one of the poles is real, and in
the normalized presentation its value is s_1 = −ω/ω_H = −1. In the non-normalized
form, the value of the real pole is equal to −ω_H. Since this is the radius of the circle on
which all the poles lie, we can calculate the upper half power frequency also from any
pole (for Butterworth poles only!):

   ω_H = |s_k|    (4.3.18)

   ω_H = √(σ_k^2 + ω_k^2)    (4.3.19)
P.Stari, E.Margan
Fig. 4.3.3: Frequency response magnitude of systems with Butterworth poles, n = 1 … 10.
The frequency response of the 5th-order system is:

   F₅(s) = a₀ / [(s − s₁)(s − s₂)(s − s₃)(s − s₄)(s − s₅)]               (4.3.20)

with s = jω/ω_H and s_i = σ_i + jω_i (the values of σ_i and ω_i are listed in Table 4.3.1).
By multiplying all the expressions in parentheses, we obtain:

   F₅(s) = a₀ / (s⁵ + a₄s⁴ + a₃s³ + a₂s² + a₁s + a₀)                     (4.3.21)

where:

   a₄ = −(s₁ + s₂ + s₃ + s₄ + s₅)
   a₃ = s₁s₂ + s₁s₃ + s₁s₄ + s₁s₅ + s₂s₃ + s₂s₄ + s₂s₅ + s₃s₄ + s₃s₅ + s₄s₅
   a₂ = −(s₁s₂s₃ + s₁s₂s₄ + s₁s₂s₅ + s₁s₃s₄ + s₁s₃s₅ + s₁s₄s₅ + s₂s₃s₄ + s₂s₃s₅ + s₂s₄s₅ + s₃s₄s₅)
   a₁ = s₂s₃s₄s₅ + s₁s₃s₄s₅ + s₁s₂s₄s₅ + s₁s₂s₃s₅ + s₁s₂s₃s₄
   a₀ = −s₁s₂s₃s₄s₅                                                      (4.3.22)
If we use the normalized poles with the numerical values listed in Table 4.3.1 to
calculate the coefficients a₀ … a₄, we obtain:

   F₅(s) = 1 / (s⁵ + 3.2361 s⁴ + 5.2361 s³ + 5.2361 s² + 3.2361 s + 1)   (4.3.23)

       = 1 / [(s + 1)(s² + 0.6180 s + 1)(s² + 1.6180 s + 1)]             (4.3.24)

The reason why we took a particular interest in the function with the normalized
numerical values of the order n = 5 is that in Sec. 4.5 we will compare it with the
function having Bessel poles of the same order.
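The coefficients of Eq. 4.3.23 can be reproduced by multiplying out the pole factors numerically; here is a short hedged sketch (Python/NumPy, not the book's routines), where np.poly builds a polynomial from its roots:

```python
import numpy as np

# Butterworth poles for n = 5 (Eq. 4.3.17)
n = 5
k = np.arange(1, n + 1)
theta = np.pi * (2 * k - 1) / (2 * n)
poles = -np.sin(theta) + 1j * np.cos(theta)

# Multiply out prod(s - s_i); coefficients come out real for a conjugate pole set
coeffs = np.real(np.poly(poles))
print(np.round(coeffs, 4))   # 1, 3.2361, 5.2361, 5.2361, 3.2361, 1
```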
4.3.2 Phase response

The general expression for the phase angle is:

   φ = Σ_{i=1}^{n} arctan [ (ω/ω_H − ω_i) / σ_i ]                        (4.3.25)

For an odd number of poles the imaginary part of the first pole is ω₁ = 0. For the
remaining poles, or in the case of even n, we enter the complex conjugate pair
components s_{i,i+1} = σ_i ± jω_i. The phase response plots are drawn in Fig. 4.3.4. By
comparing it with Fig. 4.1.4 we note that Butterworth poles result in a much steeper
phase slope near the system's cut off frequency at ω = ω_H (which is even more evident
in the envelope delay).
Fig. 4.3.4: Phase angle of systems with Butterworth poles, n = 1 … 10 (ω_H = 1/RC).
Fig. 4.3.5: Envelope delay of systems with Butterworth poles, n = 1 … 10 (ω_H = 1/RC).
The step response calculation starts from the transfer function:

   F(s) = (−1)ⁿ s₁s₂ ⋯ sₙ / [(s − s₁)(s − s₂) ⋯ (s − sₙ)]                (4.3.27)

which we multiply by the unit step operator 1/s:

   G(s) = (−1)ⁿ s₁s₂ ⋯ sₙ / [s (s − s₁)(s − s₂) ⋯ (s − sₙ)]              (4.3.28)

To obtain the step response in the time domain we use the ℒ⁻¹ transform:

   g(t) = ℒ⁻¹{G(s)} = Σ_{i} res_i [G(s) e^{st}]                          (4.3.29)
It would take too much space to list the complete analytical calculation for
systems with 1 to 10 poles. Some examples can be found in the Appendix 2.3. Here we
shall use the computer routines, which we develop and discuss in detail in Part 6. The
plots for 8 ""! are shown in Fig. 4.3.6.
The plots confirm our expectation that amplifiers with Butterworth poles are not
suitable for pulse amplification. The main advantage of Butterworth poles is the flat
frequency response (MFA) in the passband (evident from the plots in Fig. 4.3.3). For
measuring sinusoidal signals in a wide range of frequencies, i.e., in an electronic
voltmeter, Butterworth poles offer the best solution.
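As an illustration of Eq. 4.3.29, the residue sum can be evaluated directly. The sketch below (plain NumPy, not the Part 6 routines; it assumes simple, non-repeated poles) computes the Butterworth step response for n = 5 and its overshoot:

```python
import numpy as np

def step_response(poles, t):
    """g(t) = sum of residues of prod(-s_k) / (s * prod(s - s_k)) * e^{st}.
    Assumes simple (non-repeated) poles; illustrative sketch only."""
    poles = np.asarray(poles, dtype=complex)
    a0 = np.prod(-poles)                  # makes g(inf) = 1
    g = np.ones_like(t, dtype=complex)    # residue at s = 0
    for i, si in enumerate(poles):
        others = np.delete(poles, i)
        res = a0 / (si * np.prod(si - others))
        g = g + res * np.exp(si * t)
    return g.real

n = 5
k = np.arange(1, n + 1)
poles = -np.sin(np.pi*(2*k-1)/(2*n)) + 1j*np.cos(np.pi*(2*k-1)/(2*n))
t = np.linspace(0.0, 20.0, 2001)
g = step_response(poles, t)
print(f"overshoot = {100*(g.max()-1):.1f} %")   # exceeds 9 % for n = 5
```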
Fig. 4.3.6: Step response of systems with Butterworth poles, n = 1 … 10 (T = RC).
Table 4.3.1: Butterworth poles s_k = σ_k ± jω_k and the corresponding pole angles θ, for systems of order n = 1 … 10:

 n    σ [rad/s]    ω [rad/s]    θ [°]
 1    −1.0000       0.0000      180
 2    −0.7071      ±0.7071      180 ∓ 45.0000
 3    −1.0000       0.0000      180
      −0.5000      ±0.8660      180 ∓ 60.0000
 4    −0.9239      ±0.3827      180 ∓ 22.5000
      −0.3827      ±0.9239      180 ∓ 67.5000
 5    −1.0000       0.0000      180
      −0.8090      ±0.5878      180 ∓ 36.0000
      −0.3090      ±0.9511      180 ∓ 72.0000
 6    −0.9659      ±0.2588      180 ∓ 15.0000
      −0.7071      ±0.7071      180 ∓ 45.0000
      −0.2588      ±0.9659      180 ∓ 75.0000
 7    −1.0000       0.0000      180
      −0.9010      ±0.4339      180 ∓ 25.7143
      −0.6235      ±0.7818      180 ∓ 51.4286
      −0.2225      ±0.9749      180 ∓ 77.1429
 8    −0.9808      ±0.1951      180 ∓ 11.2500
      −0.8315      ±0.5556      180 ∓ 33.7500
      −0.5556      ±0.8315      180 ∓ 56.2500
      −0.1951      ±0.9808      180 ∓ 78.7500
 9    −1.0000       0.0000      180
      −0.9397      ±0.3420      180 ∓ 20.0000
      −0.7660      ±0.6428      180 ∓ 40.0000
      −0.5000      ±0.8660      180 ∓ 60.0000
      −0.1736      ±0.9848      180 ∓ 80.0000
10    −0.9877      ±0.1564      180 ∓  9.0000
      −0.8910      ±0.4540      180 ∓ 27.0000
      −0.7071      ±0.7071      180 ∓ 45.0000
      −0.4540      ±0.8910      180 ∓ 63.0000
      −0.1564      ±0.9877      180 ∓ 81.0000
Fig. 4.3.7: The ideal MFA frequency response magnitude: unity up to ω/ω_H = 1, zero above.
For the time being we assume that the function A(ω) is real, and consequently it
has no phase shift. At the instant t = 0 we apply a unit step voltage to the input of the
amplifier (multiply A(ω) by the unit step operator 1/s). By applying the basic formula
for the ℒ⁻¹ transform (Part 1, Eq. 1.4.4), the output function in the time domain is the
integral of the sin t/t function [Ref. 4.2]:

   g(t) = (1/π) ∫_{−∞}^{t} (sin τ / τ) dτ                                (4.3.31)

The normalized plot of this integral is shown in Fig. 4.3.8. Here we have 50% of
the signal amplitude at the instant t = 0. Also, there is some response for t < 0, before
we applied any step voltage to the amplifier input, which is impossible. Any physically
realizable amplifier would have some phase shift and an envelope delay, therefore the
step response would be shifted rightwards from the origin. However, an infinite phase
shift and delay would be needed in order to have no response for time t < 0.

Fig. 4.3.8: Step response of a network having the ideal frequency response of Fig. 4.3.7.
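Eq. 4.3.31 is easily checked numerically; the following sketch (Python/NumPy, truncating the lower integration limit at a large negative value) reproduces the 50 % amplitude at t = 0 and the nonzero ringing before the step is applied:

```python
import numpy as np

# g(t) = (1/pi) * Int_{-inf}^{t} (sin x / x) dx, on a grid with dx = 0.001;
# np.sinc(z) = sin(pi z)/(pi z), so sin(x)/x = np.sinc(x/pi)
x = np.linspace(-200.0, 20.0, 220001)
g = np.cumsum(np.sinc(x / np.pi)) * (x[1] - x[0]) / np.pi

i0 = np.searchsorted(x, 0.0)          # index of t = 0
ipi = np.searchsorted(x, -np.pi)      # index of t = -pi
print(f"g(0)   = {g[i0]:.3f}")        # ~0.500: half amplitude at t = 0
print(f"g(-pi) = {g[ipi]:.3f}")       # ~-0.09: response before the step
```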
What we would like to know is whether there is any phase response, linear or
not, which the amplifier should have in order to suit Eq. 4.3.30 without any response for
time t < 0. The answer is negative and it was proved by R.E.A.C. Paley and N. Wiener.
Their criterion is given for an amplitude function A(ω) [Ref. 4.2]:

   ∫_{−∞}^{+∞} |ln A(ω)| / (1 + ω²) dω < ∞                               (4.3.32)

A response which, like that of Fig. 4.3.7, is exactly zero above ω_H makes |ln A(ω)|
infinite over a whole band of frequencies, so the criterion cannot be satisfied; a
realizable amplifier may only approach the ideal cutoff with a finite attenuation.
However, such an amplifier would still have a step response very similar to that
in Fig. 4.3.8, except that it would be shifted rightwards and there would be no response
for t < 0. This is because we have almost entirely (down to 4 %) and suddenly cut the
signal spectrum above ω_H. The overshoot is approximately 9 %. We have met a similar
situation in Part 1, Fig. 1.2.7a,b, in connection with the square wave, when we were
discussing the Gibbs phenomenon [Ref. 4.2].
Some readers may ask themselves why the step response overshoot of some
systems with Butterworth poles in Fig. 4.3.6 exceeds 9 %. The reason is the
corresponding non-linear phase response, resulting in a peak in the envelope delay, as
shown in Fig. 4.3.5. This is characteristic not just of the Butterworth poles, but of
any pole pattern where the magnitude and phase change more steeply at cutoff, e.g.,
Chebyshev Type I and elliptic (Cauer) systems.

We shall use Eq. 4.3.32 again when we discuss the possibility of obtaining
an ideal Gaussian response of the amplifier.
The MFED approximation starts from the ideal delay function, expressed with
hyperbolic functions:

   F(s) = e^{−s} = 1 / (sinh s + cosh s)                                 (4.4.1)

       = 1 / [sinh s (1 + cosh s / sinh s)]                              (4.4.2)

The hyperbolic functions are given by their series expansions:

   cosh s = 1 + s²/2! + s⁴/4! + s⁶/6! + s⁸/8! + ⋯                        (4.4.3)

   sinh s = s + s³/3! + s⁵/5! + s⁷/7! + s⁹/9! + ⋯                        (4.4.4)

and their ratio can be developed into a continued fraction:

   cosh s / sinh s = 1/s + 1/(3/s + 1/(5/s + 1/(7/s + 1/(9/s + ⋯))))     (4.4.5)

For a 5th-order system we truncate the continued fraction after the term 9/s, which
gives the polynomial ratio:

   cosh s / sinh s ≈ (15 s⁴ + 420 s² + 945) / (s⁵ + 105 s³ + 945 s)      (4.4.6)

Now we put this and Eq. 4.4.4 into Eq. 4.4.2 and perform the suggested division
by sinh s. A normalized expression, where F(s) = 1 if s = 0, is obtained by multiplying
the numerator by 945. With these operations we obtain:

   F(s) = e^{−s} ≈ 945 / (s⁵ + 15 s⁴ + 105 s³ + 420 s² + 945 s + 945)    (4.4.7)
"
e=
"
=#
=$
=%
=&
"=
#x
$x
%x
&x
(4.4.8)
=8
+8"
=8"
+!
+# =# +" = +!
(4.4.9)
where the numerical values for the coefficients can be calculated by the equation:
+3
# 8 "x
#83 3x 8 3x
(4.4.10)
N"# 4 =
cosh =
sinh =
4 N"# 4 =
(4.4.11)
where N"# 4 = and 4 N"# 4 = are the spherical Bessel functions [Ref. 4.10, 4.11].
Therefore we name the polynomials having their coefficients expressed by Eq. 4.4.10
Bessel polynomials. Their roots are the poles of Eq. 4.4.9 and we call them Bessel
poles. We have listed the values of Bessel poles for polynomials of order 8 ""! in
Table 4.4.1, along with the corresponding pole angles )3 .
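Eq. 4.4.10 is straightforward to evaluate; this short sketch (Python, not the book's code) generates the Bessel polynomial coefficients, reproducing the denominator of Eq. 4.4.7 for n = 5:

```python
from math import factorial

def bessel_coeffs(n):
    """Coefficients a_i of Eq. 4.4.9, from Eq. 4.4.10:
    a_i = (2n - i)! / (2^(n-i) * i! * (n - i)!),  i = 0 .. n."""
    return [factorial(2*n - i) // (2**(n - i) * factorial(i) * factorial(n - i))
            for i in range(n + 1)]

print(bessel_coeffs(5))   # [945, 945, 420, 105, 15, 1]
```

The roots of the resulting polynomial (e.g. with a numerical root finder) are the Bessel poles of the table below.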
We usually express Eq. 4.4.9 in another normalized form which is suitable for
the ℒ⁻¹ transform:

   F(s) = (−1)ⁿ s₁s₂s₃ ⋯ sₙ / [(s − s₁)(s − s₂)(s − s₃) ⋯ (s − sₙ)]      (4.4.12)
Fig. 4.4.1: Bessel poles for order n = 1 … 10. Bessel poles lie on a family of ellipses with
one focus at the origin of the complex plane and the other focus on the positive real axis.
The first-order pole is the same as for the Butterworth system and lies on the unit circle.
=" = # = $ = 8
= =" = =# = =$ = =8
(4.4.13)
= 4==h
where =h "VG is the non-peaking stage cut off frequency. If we put the numerical
values for poles =i 5i 4 =i and = 4 ==h as suggested, then this formula obtains a
form similar to Part 2, Eq. 2.6.10 (where we had 4 poles only).
The magnitude plots for 8 ""! are shown in Fig. 4.4.2. By comparing this
figure with Fig. 4.3.3, where the frequency response curves for Butterworth poles are
displayed, we note an important difference: for Butterworth poles the upper half power
frequency is always ", regardless of the number of poles. In contrast, for Bessel poles
the upper half power frequency increases with 8.
The reason for the difference is that the derivation of 8 Butterworth poles was
" (for magnitude), whilst the Bessel poles were derived from the
based on #8
condition for a unit envelope delay. This difference prevents any direct comparison of
the bandwidth extension and the rise time improvement between both kinds of poles.
To be able to compare the two types of systems on a fair basis we must normalize the
Bessel poles to the first-order cut off frequency. We do this by recursively multiplying
the poles by a correction factor and calculate the cut off frequency, until a satisfactory
approximation is reached (see Sec. 4.4.6). Also, a special set of Bessel poles is derived
in Sec. 4.5, allowing us to interpolate between Bessel and Butterworth poles. The
BESTAP algorithm (in Part 6) calculates the Bessel poles in any of the three options.
Fig. 4.4.2: Frequency response magnitude of systems with Bessel poles for order n = 1 … 10 (ω_h = 1/RC).
"
"!!
#
"$'
$
"(&
%
#"#
&
#%#
'
#(!
(
#*&
)
$")
*
$$*
"!
$&*
(4.4.14)
Fig. 4.4.3 shows the phase plots of Eq. 4.4.14 for Bessel poles for the order
8 ""! (owing to the cutoff frequency increasing with order 8, the frequency scale
had to be extended to see the asymptotic values at high frequencies).
So far we have used a logarithmic frequency scale for our phase response plots.
However, by using a linear frequency scale, as in Fig. 4.4.4, the plots show that the
phase response for Bessel poles is linear up to a certain frequency [Ref. 4.10], which
increases with an increased order 8.
Fig. 4.4.3: Phase angle of the systems with Bessel poles of order n = 1 … 10 (ω_h = 1/RC).
Fig. 4.4.4: Phase angle as in Fig. 4.4.3, but on a linear frequency scale. Note the linear
phase frequency dependence extending from the origin to progressively higher frequencies.
4.4.4 Envelope delay

Here, too, we take the corresponding formula from the Butterworth poles and
replace the frequency normalization ω_H by ω_h:

   τ_e ω_h = Σ_{i=1}^{n} (−σ_i) / [σ_i² + (ω/ω_h − ω_i)²]                (4.4.15)
The envelope delay plots are shown in Fig. 4.4.5. The delay is flat up to a certain
frequency, which increases with increasing order 8. This was our goal when we were
deriving the Bessel poles, starting with Eq. 4.4.1. Therefore the name MFED
(Maximally Flat Envelope Delay) is fully justified by this figure. This property is
essential for pulse amplification. Because pulses contain a broad range of frequency
components, all of them, (or in practice, the most significant ones, i.e., those which are
not attenuated appreciably) should be subject to equal time delay when passing through
the amplifier in order to preserve the pulse shape at the output as much as possible.
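Eq. 4.4.15 can be used to verify the MFED property directly. A small sketch (Python; the n = 3 pole values are taken from Table 4.4.3) shows that the envelope delay at low frequencies is indeed unity and stays flat well into the passband:

```python
def envelope_delay(poles, w):
    """tau_e(w) = sum_i -sigma_i / (sigma_i^2 + (w - omega_i)^2), cf. Eq. 4.4.15."""
    return sum(-s.real / (s.real**2 + (w - s.imag)**2) for s in poles)

# n = 3 Bessel poles, normalized to unit envelope delay (Table 4.4.3)
poles = [complex(-2.3222, 0.0),
         complex(-1.8389, 1.7544), complex(-1.8389, -1.7544)]

print(f"tau_e(0) = {envelope_delay(poles, 0.0):.4f}")   # ~1.0000
print(f"tau_e(1) = {envelope_delay(poles, 1.0):.4f}")   # still ~1: flat delay
```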
Fig. 4.4.5: Envelope delay of the systems with Bessel poles for order n = 1 … 10. Note
the flat unit delay response extending in frequency with increasing system order. This figure
demonstrates the fulfilment of the criterion from which we have started the derivation of MFED.
For the step response we multiply Eq. 4.4.12 by the unit step operator 1/s:

   G(s) = (−1)ⁿ s₁s₂s₃ ⋯ sₙ / [s (s − s₁)(s − s₂)(s − s₃) ⋯ (s − sₙ)]    (4.4.16)

and apply the ℒ⁻¹ transform:

   g(t) = ℒ⁻¹{G(s)} = Σ_{i} res_i [G(s) e^{st}]                          (4.4.17)
By inserting the numerical pole values from Table 4.4.3 for the systems of order
n = 1 … 10 we could proceed in the same way as in the examples in Appendix 2.3, but it
would take too much space. Instead, we shall use the routines developed in Part 6 to
generate the plots of Fig. 4.4.6. This diagram is notably different from the step response
plots of Butterworth poles in Fig. 4.3.6. Again, the reason is that for normalized
Butterworth poles the upper half power frequency ω_H is always one, regardless of the
order n; consequently the step response always has the same maximum slope, but a
progressively larger delay. The Bessel poles, on the contrary, have progressively steeper
slope, whilst the delay approaches unity. This is also reflected by the improvement in
rise time, listed in Table 4.4.2. Of course, the improvement in rise time is even higher
for peaking circuits using T-coils.
Fig. 4.4.6: Step response of systems with Bessel poles of order n = 1 … 10.
Note the 50% amplitude delay approaching unity as the system order increases.
"
"!!
#
"$)
$
"(&
%
#!*
&
#$*
'
#')
(
#*$
)
$")
*
$%"
"!
$'!
From Fig. 4.4.2 and 4.4.6 one could draw the false conclusion that the upper half
power frequency increases and the rise time decreases if more equal amplifier stages are
cascaded. This is not true, because all the parameters of systems having Bessel poles are
defined with respect to the single stage non-peaking amplifier, where ω_h = 1/RC.
In the case of a system with 8 Bessel poles this would mean chopping the stray
capacitance of a single amplifying stage into smaller capacitances and separating them
by coils to create 8 poles.
Unfortunately there is a limit in practice because each individual amplifier stage
input sees two capacitances: the output capacitance of the previous stage and its own
input capacitance. Therefore, in a single stage we can have at most four poles (either
Bessel, Butterworth or of any other family).
If we use more than one stage, we can assign a small group of staggered poles
from the nth-order group (from either Table 4.3.1 or Table 4.4.3) to each stage, so that the
system as a whole has the poles as specified by the chosen group. Then no stage by
itself will be optimized, but the amplifier as a whole will be. More details of this
technique are given in Sec. 4.6 and some examples can be found in Part 5 and Part 7.
Table 4.4.3: Bessel poles s_k = σ_k ± jω_k, normalized to unit envelope delay, and the corresponding pole angles θ, for systems of order n = 1 … 10:

 n    σ [rad/s]    ω [rad/s]    θ [°]
 1    −1.0000       0.0000      180
 2    −1.5000      ±0.8660      180 ∓ 30.0000
 3    −2.3222       0.0000      180
      −1.8389      ±1.7544      180 ∓ 43.6525
 4    −2.8962      ±0.8672      180 ∓ 16.6697
      −2.1038      ±2.6574      180 ∓ 51.6325
 5    −3.6467       0.0000      180
      −3.3520      ±1.7427      180 ∓ 27.4696
      −2.3247      ±3.5710      180 ∓ 56.9366
 6    −4.2484      ±0.8675      180 ∓ 11.5411
      −3.7357      ±2.6263      180 ∓ 35.1079
      −2.5159      ±4.4927      180 ∓ 60.7508
 7    −4.9718       0.0000      180
      −4.7583      ±1.7393      180 ∓ 20.0787
      −4.0701      ±3.5172      180 ∓ 40.8316
      −2.6857      ±5.4207      180 ∓ 63.6439
 8    −5.5879      ±0.8676      180 ∓  8.8257
      −5.2048      ±2.6162      180 ∓ 26.6861
      −4.3683      ±4.4144      180 ∓ 45.3011
      −2.8390      ±6.3539      180 ∓ 65.9245
 9    −6.2970       0.0000      180
      −6.1294      ±1.7378      180 ∓ 15.8295
      −5.6044      ±3.4982      180 ∓ 31.9715
      −4.6384      ±5.3173      180 ∓ 48.9007
      −2.9793      ±7.2915      180 ∓ 67.7753
10    −6.9220      ±0.8677      180 ∓  7.1447
      −6.6153      ±2.6116      180 ∓ 21.5430
      −5.9675      ±4.3849      180 ∓ 36.3085
      −4.8862      ±6.2250      180 ∓ 51.8703
      −3.1089      ±8.2327      180 ∓ 69.3119
The ideal Gaussian frequency response, normalized so that the half power
frequency equals ω_h, is:

   G(ω) = e^{−(ω/ω_h)² ln√2}                                             (4.4.18)

the plot of which is shown in Fig. 4.4.7 for both positive and negative frequencies (and,
to acquire a feeling for Bessel systems, compared to the magnitude of a 5th-order system
with modified Bessel poles).

Fig. 4.4.7: Ideal Gaussian (MFED) frequency response G (real only, with no phase shift),
compared to the magnitude of a 5th-order modified Bessel system B5a (identical cutoff
asymptote, Table 4.5.1). The frequency scale is two-sided, linear, and normalized to
ω_h = 1/RC of the first-order system, which is shown as the reference R.
By examining Eq. 4.4.9 and Eq. 4.4.18 we come to the conclusion that it is
possible to approximate the Gaussian response with any desired accuracy up to a
certain frequency. At higher frequencies, the Gaussian response falls faster than the
approximating response. This is brought into evidence in Fig. 4.4.8, where the same
responses are plotted in a log/log scale.

By applying a unit step at the instant t = 0 to the input of the hypothetical
amplifier having a Gaussian frequency response, the resulting step response is equal to
the so called error function, which is defined as the time integral of the exponential
function of time squared [Ref. 4.2]:

   g_G(t) = erf(t₁) = [1 / (2√π)] ∫_{−∞}^{t₁} e^{−t²/4} dt               (4.4.19)
Fig. 4.4.8: Frequency response in a log/log scale brings into evidence how the ideal Gaussian
response G decreases much more steeply with frequency than the 5th-order Bessel response
B5a. The Bessel system would have to be of infinitely high order to match the Gaussian
response.
Fig. 4.4.9: Step response of a hypothetical system, g_G, having the ideal Gaussian frequency
response with no phase shift, as the one in Fig. 4.4.7 and 4.4.8. Compare it with the step
response of a 5th-order Bessel system, g_B5a, with modified Bessel poles, Table 4.5.1, and
envelope delay compensated (τ_e = 3.94 T_H) for minimal difference in the half amplitude
region.
The plot of Eq. 4.4.21, calculated by the Simpson method, is shown in Fig. 4.4.9.
The step response is symmetrical, without any overshoot. However, here too, we have a
response for t < 0, as it was in the ideal MFA amplifier.

If our hypothetical amplifier were to have any linear phase delay, the curve g_G in
Fig. 4.4.9 would be shifted rightwards from the origin, but an infinite phase shift would
be required in order to have no response for time t < 0 (the same as for Fig. 4.3.8).

By looking back at Eq. 4.4.7, we realize that we would need an infinite number
of terms in the denominator (i.e., an infinite number of poles) in order to justify an equality
sign instead of an approximation (≈). This would mean an infinite number of system
components and amplifying stages, and therefore the conclusion is that we cannot
make an amplifier with an ideal Gaussian response (but we can come very close).

A proof, based on the Paley–Wiener criterion, can be carried out in the
following way: if we compare the step response in Fig. 4.4.9 with the step response of a
non-peaking multi-stage amplifier in Fig. 4.1.7, for n = 10, there is a great similarity.
Therefore we can ask ourselves if a Gaussian response could be achieved by increasing
the number of stages to some arbitrarily large number (n → ∞). By doing so, the phase
response diverges when n → ∞ and it becomes infinite if ω → ∞. Therefore, for both
reasons (infinite number of stages and divergent phase response), it is not possible to
make an amplifier with an ideal Gaussian response.
4.4.6 Bessel Poles Normalized to Identical Cutoff Frequency

Because the Bessel poles are derived from the requirement for an identical
envelope delay, there is no simple way of renormalizing them back to the same cut off
frequency. However, such a renormalization would be very useful, not only for
comparing the systems with different pole families and equal order, but also for
comparing systems of different order within the Bessel family itself.

What is difficult to do analytically is often easily done numerically, especially if
the actual number crunching is executed by a machine. The normalization procedure
goes by taking the original Bessel poles and finding the system magnitude by Eq. 4.4.13
at the unit frequency (ω/ω_h = 1). We obtain an attenuation value, say, |F(1)| = a,
whilst we want |F(1)| to be 1/√2. The ratio q = 1/(a√2) is the correction factor by which
we multiply all the poles, and we insert the new poles again into Eq. 4.4.13. We keep
repeating this procedure until |q − 1| < δ, with δ being an arbitrarily small error;
for practical purposes, a value of δ = 0.001 is adequate. In the algorithm presented in
Part 6, this tolerance is reached in only 6 to 9 iterations, depending on system order.
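The renormalization is easy to sketch in a few lines of Python. The update step below uses an n-th-root correction factor (an assumption on my part; the book's BESTAP routine in Part 6 may use a different step), but it converges to the same fixed point, reproducing the n = 2 entry of Table 4.4.4:

```python
import numpy as np

def mag_at(poles, w):
    """|F(jw)| of an all-pole function, normalized so that F(0) = 1."""
    poles = np.asarray(poles, dtype=complex)
    return abs(np.prod(-poles) / np.prod(1j * w - poles))

def normalize_to_cutoff(poles, tol=1e-4, max_iter=100):
    """Rescale a pole set until the half-power frequency equals 1.
    Sketch of the iteration described in the text, with an assumed
    n-th-root update; not the book's BESTAP code."""
    poles = np.asarray(poles, dtype=complex)
    n = len(poles)
    for _ in range(max_iter):
        a = mag_at(poles, 1.0)
        if abs(a - 2 ** -0.5) < tol:
            break
        poles = poles * (a * 2 ** 0.5) ** (-1.0 / n)
    return poles

# n = 2 Bessel poles with unit envelope delay (Table 4.4.3)
p = normalize_to_cutoff([-1.5 + 0.866j, -1.5 - 0.866j])
print(np.round(p, 4))   # close to -1.1017 +/- 0.6360j of Table 4.4.4
```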
The following graphs were made using the computer algorithms presented in
Part 6 and show the performance of cut off frequency normalized Bessel systems of
order n = 1 … 10, as in the previous figures.

Fig. 4.4.10 shows the frequency response magnitude; the plots for n = 5 … 9 are
missing, since the difference is too small to identify them on such a vertical scale (the
difference in high frequency attenuation becomes significant with higher magnitude
resolution, say, down to 0.001 or more). Fig. 4.4.11 shows the phase, Fig. 4.4.12 the
envelope delay and Fig. 4.4.13 the step response. Finally, in Table 4.4.4 we
report the values of the Bessel poles and their respective angles for systems with equal cut
off frequency.
Fig. 4.4.10: Frequency response magnitude of systems with normalized Bessel poles of order
n = 1, 2, 3, 4, and 10. Note the nearly identical passband response; this is the reason why we
can approximate the oscilloscope (multi-stage) amplifier rise time from the cut off frequency,
using the relation for the first-order system: τ_r ≈ 0.35/f_T.
Fig. 4.4.11: Phase angle of systems with normalized Bessel poles of order n = 1 … 10.
Fig. 4.4.12: Envelope delay of systems with normalized Bessel poles of order n = 1 … 10.
Although the bandwidth is the same, the delay flatness extends progressively with system
order, already reaching beyond the system cut off frequency for n ≥ 5.
Fig. 4.4.13: Step response of systems with normalized Bessel poles of order n = 1 … 10.
Note the half amplitude slope being almost equal for all systems, indicating an equal
system cut off frequency.
Table 4.4.4: Bessel poles normalized to equal (unit) cut off frequency, for systems of order n = 1 … 10:

 n    σ [rad/s]    ω [rad/s]    θ [°]
 1    −1.0000       0.0000      180
 2    −1.1017      ±0.6360      180 ∓ 30.0000
 3    −1.3227       0.0000      180
      −1.0475      ±0.9993      180 ∓ 43.6525
 4    −1.3701      ±0.4103      180 ∓ 16.6697
      −0.9952      ±1.2571      180 ∓ 51.6325
 5    −1.5024       0.0000      180
      −1.3810      ±0.7180      180 ∓ 27.4696
      −0.9578      ±1.4713      180 ∓ 56.9366
 6    −1.5716      ±0.3209      180 ∓ 11.5411
      −1.3819      ±0.9715      180 ∓ 35.1079
      −0.9307      ±1.6619      180 ∓ 60.7508
 7    −1.6845       0.0000      180
      −1.6122      ±0.5893      180 ∓ 20.0787
      −1.3790      ±1.1917      180 ∓ 40.8316
      −0.9099      ±1.8366      180 ∓ 63.6439
 8    −1.7575      ±0.2729      180 ∓  8.8257
      −1.6370      ±0.8228      180 ∓ 26.6861
      −1.3739      ±1.3884      180 ∓ 45.3011
      −0.8929      ±1.9984      180 ∓ 65.9245
 9    −1.8567       0.0000      180
      −1.8072      ±0.5124      180 ∓ 15.8295
      −1.6525      ±1.0314      180 ∓ 31.9715
      −1.3676      ±1.5678      180 ∓ 48.9007
      −0.8784      ±2.1499      180 ∓ 67.7753
10    −1.9277      ±0.2416      180 ∓  7.1447
      −1.8423      ±0.7273      180 ∓ 21.5430
      −1.6619      ±1.2212      180 ∓ 36.3085
      −1.3608      ±1.7336      180 ∓ 51.8703
      −0.8658      ±2.2927      180 ∓ 69.3119
For the pole interpolation both pole families must share the same normalization.
We write the all-pole transfer function as:

   F(s) = (−1)ⁿ ∏_{k=1}^{n} s_k / ∏_{k=1}^{n} (s − s_k)                  (4.5.1)

       = a₀ / (sⁿ + a_{n−1}s^{n−1} + ⋯ + a₂s² + a₁s + a₀)                (4.5.2)

where:

   a₀ = (−1)ⁿ ∏_{k=1}^{n} s_k                                            (4.5.3)

If we divide both the numerator and the denominator by a₀:

   F(s) = 1 / [sⁿ/a₀ + (a_{n−1}/a₀) s^{n−1} + ⋯ + (a₂/a₀) s² + (a₁/a₀) s + 1]      (4.5.4)

and substitute:

   sⁿ/a₀ = (s/a₀^{1/n})ⁿ   or   s → s/a₀^{1/n}                           (4.5.5)

we obtain:

   F(s) = 1 / (sⁿ + b_{n−1}s^{n−1} + ⋯ + b₂s² + b₁s + 1)                 (4.5.6)

where, e.g.:

   b₁ = a₁ / a₀^{1−1/n}                                                  (4.5.7)

and, in general, b_k = a_k / a₀^{1−k/n}.

The coefficients a_k for the Bessel polynomials are calculated by Eq. 4.4.10.
Then the coefficients b_k are those of the modified Bessel polynomial, from which we
can calculate the modified Bessel poles of F(s) for the orders n = 1 … 10. These poles are
listed, together with the corresponding pole radii r and pole angles θ, in Table 4.5.1.
4.5.2 Pole Interpolation Procedure

At the time of the German mathematician Friedrich Wilhelm Bessel
(1784–1846), there were no electronic filters and no wideband amplifiers to which the
roots of his polynomials could be applied. W.A. Thomson [Ref. 4.9] was the first to use
them and he also derived the expressions required for MFED network synthesis.
Therefore some engineers use the name Thomson poles or, perhaps more correctly,
Bessel–Thomson poles.

In the following discussion we shall interpolate between the Butterworth and the
modified Bessel poles. If we were to label the poles by initials only, confusion would
result in the graphs and formulae. Therefore, to label the modified Bessel poles, we
shall use the subscript T in honor of W.A. Thomson.

The procedure of pole interpolation can be explained with the aid of Fig. 4.5.1.
Fig. 4.5.1: Pole interpolation procedure. The Butterworth pole (index B) and the modified
Bessel pole (index T) are expressed in polar coordinates, s(r, θ). The trajectory going through
both poles is the interpolation path required to obtain the transitional pole s_I.
We first express the poles in polar coordinates with the well known conversion:

   r_k = √(σ_k² + ω_k²)                                                  (4.5.8)

and:

   θ_k = π + arctan (ω_k / σ_k)                                          (4.5.9)

Here the π radians added are required because the arctangent function repeats with a
period of π radians, so it does not distinguish between the poles in quadrant III and
those in quadrant I, and the same is true for quadrants IV and II.
In Eq. 4.5.3 the coefficient a₀ is equal to the product of all the poles. Because we
have divided the polynomial coefficients a_k by a₀ to obtain the coefficients b_k, we have
effectively normalized the product of all poles to one:

   | ∏_{k=1}^{n} s_k | = 1                                               (4.5.11)

and this is now true for both the Butterworth and the modified Bessel poles. Therefore we
can assume that there exists a trajectory going through the k-th Butterworth pole s_Bk and
the k-th Bessel pole s_Tk, and each point on this trajectory can represent a pole s_Ik which
can be expressed as:

   s_Ik = r_Ik e^{jθ_Ik}                                                 (4.5.12)

such that the absolute product of all the interpolated poles s_I is kept equal to one. Then:

   r_Ik = r_Tk^m · r_Bk^{1−m}                                            (4.5.13)

and:

   θ_Ik = m θ_Tk + (1 − m) θ_Bk                                          (4.5.14)

where the interpolation parameter m runs from 0 (Butterworth) to 1 (modified Bessel).
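The interpolation of Eqs. 4.5.13 and 4.5.14 takes only a few lines; here is a hedged Python sketch (standard library only, not the book's code) that reproduces the first transitional pole for n = 3 at m = 1/2:

```python
import cmath, math

def interpolate_pole(s_B, s_T, m):
    """Transitional pole between a Butterworth pole s_B and a modified Bessel
    pole s_T, interpolated in polar coordinates (Eqs. 4.5.13 and 4.5.14):
    r_I = r_T**m * r_B**(1-m),  theta_I = m*theta_T + (1-m)*theta_B."""
    r = abs(s_T) ** m * abs(s_B) ** (1 - m)
    theta = m * cmath.phase(s_T) + (1 - m) * cmath.phase(s_B)
    return cmath.rect(r, theta)

# First (real) pole for n = 3, half-way interpolation (m = 1/2):
s_B = complex(-1.0, 0.0)        # Butterworth: r = 1, theta = 180 deg
s_T = complex(-0.9416, 0.0)     # modified Bessel, Table 4.5.1
print(interpolate_pole(s_B, s_T, 0.5))   # real part ~ -0.9704, cf. Eq. 4.5.17
```

Since r_B = 1 for all Butterworth poles, the radius interpolation reduces to r_I = r_T^m, i.e. a geometric mean for m = 1/2.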
Table 4.5.1: Modified Bessel poles (pole product normalized to 1), with the corresponding pole radii r and angles θ, for systems of order n = 1 … 10:

 n    σ [rad/s]    ω [rad/s]    r        θ [°]
 1    −1.0000       0.0000     1.0000    180
 2    −0.8660      ±0.5000     1.0000    180 ∓ 30.0000
 3    −0.9416       0.0000     0.9416    180
      −0.7456      ±0.7114     1.0305    180 ∓ 43.6525
 4    −0.9048      ±0.2709     0.9444    180 ∓ 16.6697
      −0.6572      ±0.8302     1.0588    180 ∓ 51.6325
 5    −0.9264       0.0000     0.9264    180
      −0.8516      ±0.4427     0.9598    180 ∓ 27.4696
      −0.5906      ±0.9072     1.0825    180 ∓ 56.9366
 6    −0.9094      ±0.1857     0.9282    180 ∓ 11.5411
      −0.7997      ±0.5622     0.9775    180 ∓ 35.1079
      −0.5386      ±0.9617     1.1022    180 ∓ 60.7508
 7    −0.9195       0.0000     0.9195    180
      −0.8800      ±0.3217     0.9369    180 ∓ 20.0787
      −0.7527      ±0.6505     0.9948    180 ∓ 40.8316
      −0.4967      ±1.0025     1.1188    180 ∓ 63.6439
 8    −0.9097      ±0.1412     0.9206    180 ∓  8.8257
      −0.8473      ±0.4259     0.9483    180 ∓ 26.6861
      −0.7111      ±0.7187     1.0110    180 ∓ 45.3011
      −0.4622      ±1.0344     1.1329    180 ∓ 65.9245
 9    −0.9155       0.0000     0.9155    180
      −0.8911      ±0.2527     0.9262    180 ∓ 15.8295
      −0.8148      ±0.5086     0.9605    180 ∓ 31.9715
      −0.6744      ±0.7731     1.0259    180 ∓ 48.9007
      −0.4331      ±1.0601     1.1451    180 ∓ 67.7753
10    −0.9091      ±0.1140     0.9162    180 ∓  7.1447
      −0.8688      ±0.3430     0.9341    180 ∓ 21.5430
      −0.7838      ±0.5759     0.9726    180 ∓ 36.3085
      −0.6418      ±0.8176     1.0394    180 ∓ 51.8703
      −0.4083      ±1.0813     1.1558    180 ∓ 69.3119
As an example we construct a transitional three-pole system, half-way (m = 1/2)
between Butterworth and modified Bessel. From Table 4.3.1, for the order n = 3, the
Butterworth poles are:

   r_1B = 1      θ_1B = 180°
   r_2B = 1      θ_2B = 180° − 60°
   r_3B = 1      θ_3B = 180° + 60°                                       (4.5.15)

and in Table 4.5.1, order n = 3, we find these values for the modified Bessel poles:

   s_1T:  r_1T = 0.9416    θ_1T = 180°
   s_2T:  r_2T = 1.0305    θ_2T = 180° − 43.65°
   s_3T:  r_3T = 1.0305    θ_3T = 180° + 43.65°                          (4.5.16)

By applying Eq. 4.5.13 and Eq. 4.5.14 with m = 1/2 we obtain the transitional
Bessel–Butterworth (TBT) poles:

   s₁ = σ₁ = −0.9704                                                     (4.5.17)

   s₂,₃ = σ₂ ± jω₂ = −0.6275 ± j0.7981                                   (4.5.18)
"
a5"
=b# 5##
a= =# b# 5## a= =# b#
(4.5.19)
"
a!*(!% =b
#
!'#(%#
The magnitude plot is shown in Fig. 4.5.2 (TBT); for comparison, the magnitude
plots with Butterworth poles and modified Bessel poles are also drawn.
The normalized phase response is calculated as:

   φ = −arctan [ω/(−σ₁)] − arctan [(ω − ω₂)/(−σ₂)] − arctan [(ω + ω₂)/(−σ₂)]

     = −arctan (ω/0.9704) − arctan [(ω − 0.7981)/0.6275] − arctan [(ω + 0.7981)/0.6275]      (4.5.20)

In Fig. 4.5.3 the phase plot of the transitional (TBT) system is drawn, together
with the plots for the Butterworth and modified Bessel systems.
Fig. 4.5.2: Frequency response magnitude of the transitional Bessel–Butterworth three-pole
system (TBT), along with the modified Bessel (T) and Butterworth (B) magnitudes.
Fig. 4.5.3: Phase angle of the transitional Bessel–Butterworth three-pole
system (TBT), along with the Bessel (T) and Butterworth (B) phase.
The envelope delay is:

   τ_e = −σ₁/(σ₁² + ω²) − σ₂/[σ₂² + (ω − ω₂)²] − σ₂/[σ₂² + (ω + ω₂)²]

       = 0.9704/(0.9704² + ω²) + 0.6275/[0.6275² + (ω − 0.7981)²] + 0.6275/[0.6275² + (ω + 0.7981)²]      (4.5.21)

The envelope delay plot is shown in Fig. 4.5.4 (TBT), along with the delays for the
Butterworth and the modified Bessel system.
Fig. 4.5.4: Envelope delay of the transitional Bessel–Butterworth three-pole
system (TBT), along with the Bessel (T) and Butterworth (B) delays.
The starting point for the step response calculation is the general formula for a
three-pole function, multiplied by the unit step operator 1/s:

   G(s) = −s₁s₂s₃ / [s (s − s₁)(s − s₂)(s − s₃)]                         (4.5.22)

We calculate the corresponding step response in the time domain by the ℒ⁻¹ transform:

   g(t) = Σ_{i=1}^{3} res_i { −s₁s₂s₃ e^{st} / [s (s − s₁)(s − s₂)(s − s₃)] }      (4.5.23)
After the sum of the residues is calculated, we insert the poles s₁ = σ₁ and
s₂,₃ = σ₂ ± jω₂ to obtain (see Appendix 2.3) a step response of the form:

   g(t) = 1 − [(σ₂² + ω₂²) / ((σ₂ − σ₁)² + ω₂²)] e^{σ₁t} + A e^{σ₂t} sin(ω₂t + ψ)      (4.5.24)

where the amplitude A and the phase ψ of the oscillatory term follow from the residue
res₂ of the complex conjugate pole pair:

   A = 2 |res₂|,   ψ = π/2 + arg(res₂)                                   (4.5.25)

By inserting the numerical values for poles from Eq. 4.5.18, we arrive at the
final relation:

   g(t) = 1 + 1.4211 e^{−0.6275t} sin(0.7981t + 2.8811) − 1.3660 e^{−0.9704t}      (4.5.26)

The plot based on this formula is shown in Fig. 4.5.5 (TBT). By inserting the
appropriate pole values in Eq. 4.5.24 we obtain the plots of the Butterworth (B) and
modified Bessel (T) system step responses.
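Eq. 4.5.26 can be evaluated directly to check its consistency; the sketch below (plain Python, illustrative only) confirms that the response starts from zero, settles to one, and overshoots by a few percent, between the Bessel and Butterworth cases:

```python
import math

def g_tbt(t):
    """Step response of the transitional Bessel-Butterworth system, Eq. 4.5.26."""
    return (1.0
            + 1.4211 * math.exp(-0.6275 * t) * math.sin(0.7981 * t + 2.8811)
            - 1.3660 * math.exp(-0.9704 * t))

samples = [g_tbt(0.01 * i) for i in range(1501)]   # t = 0 .. 15
print(f"g(0) = {g_tbt(0.0):+.4f}")                 # ~0: starts from zero
print(f"overshoot = {100 * (max(samples) - 1):.1f} %")
```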
Fig. 4.5.5: The step response of the transitional Bessel–Butterworth three-pole system
(TBT), along with the Bessel (T) and Butterworth (B) responses.
=" =# =8
= =" = =# = =8
(4.6.1)
a=" =# b8#
= =" 8# = =# 8#
(4.6.2)
and:
Jr =
where 8 is an even integer (#, %, ', ). For a fair comparison we must use the poles
from Table 4.4.4, the Bessel poles normalized to the same cutoff frequency.
The plots in Fig. 4.6.1 of these two functions were made by a computer, using
the numerical methods described in Part 6. From this figure it is evident that an
amplifier with staggered poles (as reported in the Table 4.4.4 for each 8) preserves the
intended bandwidth. On the other hand, the amplifier with the same total number of
poles, but of second-order, repeated (8#)-times, does not its bandwidth shrinks with
each additional second-order stage. Obviously, if 8 # the systems are identical.
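The bandwidth shrinkage of Eq. 4.6.2 is easy to demonstrate numerically. The sketch below uses 4th-order Butterworth poles for illustration (an assumption for simplicity; the text's own comparison uses the Bessel poles of Table 4.4.4), showing that the staggered set keeps its half-power point at ω = 1 while the repeated pair loses it:

```python
import numpy as np

def mag(poles, w):
    """|F(jw)| of an all-pole function with F(0) = 1."""
    poles = np.asarray(poles, dtype=complex)
    return abs(np.prod(-poles) / np.prod(1j * w - poles))

# Staggered set: the four 4th-order Butterworth poles
k = np.arange(1, 5)
th = np.pi * (2 * k - 1) / 8
staggered = -np.sin(th) + 1j * np.cos(th)

# Repeated set: one 2nd-order Butterworth pair used twice (n/2 = 2 stages)
pair = np.array([-1 + 1j, -1 - 1j]) * np.sqrt(0.5)
repeated = np.concatenate([pair, pair])

print(f"staggered |F(j1)| = {mag(staggered, 1.0):.4f}")   # 0.7071 (-3 dB kept)
print(f"repeated  |F(j1)| = {mag(repeated, 1.0):.4f}")    # 0.5000 (-6 dB)
```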
Fig. 4.6.1: Frequency response magnitude of systems with staggered poles, compared with
systems with repeated second-order pole pairs. The bandwidth of systems with repeated
poles decreases with each additional stage.
Even if the poles were of a different kind, e.g., Butterworth or Chebyshev poles,
the staggered poles would also preserve the bandwidth, but the system with repeated
second-order pole pairs will not. For the same total number of poles the curves tend to
- 4.63 -
P.Stari, E.Margan
the same cut off asymptote (from Fig. 4.6.1 this not evident, but it would have been
clear if the graphs had been plotted with increased vertical scale, say, down to 106 ).
In the time domain the decrease of rise times is even more evident. To compare
the step responses we take Eq. 4.6.1 and 4.6.2 (without the magnitude sign) and
multiply them by the unit step operator 1/s, obtaining:

   G_s(s) = s₁s₂ ⋯ sₙ / [s (s − s₁)(s − s₂) ⋯ (s − sₙ)]                  (4.6.3)

and:

   G_r(s) = (s₁s₂)^{n/2} / [s (s − s₁)^{n/2} (s − s₂)^{n/2}]              (4.6.4)

with n being again an even integer.

By using the ℒ⁻¹ transform, we obtain the step responses in the time domain:

   g_s(t) = ℒ⁻¹{G_s(s)} = Σ res [ s₁s₂ ⋯ sₙ e^{st} / (s (s − s₁)(s − s₂) ⋯ (s − sₙ)) ]      (4.6.5)

and:

   g_r(t) = ℒ⁻¹{G_r(s)} = Σ res [ (s₁s₂)^{n/2} e^{st} / (s (s − s₁)^{n/2} (s − s₂)^{n/2}) ]      (4.6.6)
Fig. 4.6.2: Step response of systems with staggered poles, compared with systems with
repeated second-order pole pairs. The rise times of systems with repeated poles increase
with each additional stage.
=" e=>
" =" >
"
e
= a= =" b
="
g# > ! res
=# =$ e=>
"
"
e5# > sin=# > )#
= a= =# ba= =$ b
lsin )# l
g$ > ! res
=% =& e=>
"
"
e5% > sin=% > )%
= a= =% ba= =& b
lsin )% l
- 4.65 -
P.Stari, E.Margan
In the reverse pole order case the first stage, with the pole pair s₄,₅, is excited by
the unit step input signal. We know that the second-order system from Table 4.4.3 has
an optimal step response; since the imaginary to real part ratio of s₄,₅ is larger
(their tan θ₄ is greater) than it is with the poles of the optimal case, we thus expect that
the stage with s₄,₅ will exhibit a pronounced overshoot.

In Fig. 4.6.3 we have drawn the response of each stage when it is driven
individually by the unit step input signal (the responses are gain-normalized to allow
comparison). It is evident that the stage with the poles s₄,₅ has a 13% overshoot.
Fig. 4.6.3: Step response of each of the three stages taken individually.
Fig. 4.6.4: Step response of the complete amplifier with reverse pole order at each stage.
Although the second stage had no overshoot of its own, it overshoots by nearly 5% when
processing the v_4,5 output.
In Fig. 4.6.4 the step response of the complete amplifier is drawn, showing the
signal after each stage. Note that although the second stage exhibits no overshoot when
driven by the unit step (Fig. 4.6.3), it will overshoot by nearly 5% when driven by the
output of the first stage, v_4,5. And the overshoot would be even higher if we were to put
the s_1 stage in the middle.
The dynamic range of the input stage will therefore have to be larger by 13%
and that of the second stage by 5% in order to handle the signal linearly. Fortunately,
the maximal input signal is equal to the maximal output divided by the total gain. If we
have followed the rule given by Eq. 4.1.33 and Fig. 4.1.9, the input stage will have only
one third of the total system gain, so its output amplitude will be only a fraction of the
supply voltage. On the other hand, the optimal stage gain is rather low, as given by
Eq. 4.1.38, so the dynamic range may become a matter of concern after all.
The circuit configuration which is most critical in this respect is the cascode
amplifier, since there are two transistors effectively in series with the power supply, so
the biasing must be carefully chosen. In traditional discrete circuits, with relatively high
supply voltages, the dynamic range was rarely a problem; the major concern was about
poor linearity for large signals, since no feedback was used. In modern ICs, with lots of
feedback and a supply of 5 V or just 3 V, the usable dynamic range can be critical.
We can easily prevent this limitation if we use the correct pole ordering, so that
the first stage has the real pole s_1 and the last stage the pole pair s_4,5. As we can see in
Fig. 4.6.5, the situation improves considerably, since in this case the two front stages
exhibit no overshoot, whilst the output overshoot is only 0.4%.
In a real amplifier the chosen pole assignment can be affected by other factors,
e.g., the stage with the largest capacitance will require the poles with the lowest
imaginary part; alternatively, a lower loading resistor, and thus a lower gain, can be
chosen for that stage.
Fig. 4.6.5: Step response of the complete amplifier, but with the correct pole assignment.
Résumé of Part 4
The study of this part should have given the reader enough knowledge to acquire
an idea of how multi-stage amplifiers could be optimized by applying inductive peaking
circuits, discussed in Part 2, at each stage.
Also, the merit of using DC over AC coupled multi-stage amplifiers should be
clearly understood.
A proper pole pattern selection is of fundamental importance for the amplifier's
performance. In particular, for a smooth, low overshoot transient response the extended
envelope delay flatness offered by the Bessel poles provides a nearly ideal solution,
approaching the ideal Gaussian response very quickly: with only 5 poles, the system's
frequency and step response conform exceptionally well to the ideal, with a difference
of less than 1% throughout most of the transient.
Finally, the advantage of staggered vs. repeated pole pairs should be seriously
considered in the design of multi-stage amplifiers when gain–bandwidth efficiency is
the primary design goal. We hope that the reader has gained awareness of how the
bandwidth, achieved by hard work following the optimizations of each basic
amplifying stage given in Part 3, can easily be lost by a large factor if the stages
are not coupled correctly.
A few examples of how these principles are used in practice are given in Part 5.
P. Starič, E. Margan:
Wideband Amplifiers
Part 5:
List of Figures:
Fig. 5.1.1: A two-stage differential cascode 7-pole amplifier ........................................................... 5.10
Fig. 5.1.2: The 7 normalized Bessel–Thomson poles and their characteristic circles ....................... 5.11
Fig. 5.1.3: The 3-pole stage realization ............................................................................................ 5.14
Fig. 5.1.4: The 4-pole L+T-coil stage ............................................................................................... 5.15
Fig. 5.1.5: Polar plot of the actual 7 poles and the 2 real poles ......................................................... 5.18
Fig. 5.1.6: Frequency response comparison ...................................................................................... 5.19
Fig. 5.1.7: Step response comparison ................................................................................................ 5.19
Fig. 5.1.8: Two pairs of gain normalized step responses ................................................................... 5.20
Fig. 5.1.9: Two pairs of step responses with different gain ............................................................... 5.21
Fig. 5.1.10: Two pairs of step responses with similar gain ................................................................ 5.21
Fig. 5.1.11: The influence of the real pole on the step response ........................................................ 5.23
Fig. 5.1.12: Making the deflection plates in sections ........................................................................ 5.24
Fig. 5.2.1: A typical conventional oscilloscope input section ............................................................. 5.26
Fig. 5.2.2: The 10:1 attenuator frequency compensation ................................................................... 5.26
Fig. 5.2.3: Attenuator frequency response (magnitude, phase and envelope delay) .......................... 5.31
Fig. 5.2.4: Attenuator step response .................................................................................................. 5.33
Fig. 5.2.5: Compensating the 50 Ω signal source impedance ............................................................ 5.34
Fig. 5.2.6: Switching the three attenuation paths (1:1, 10:1, 100:1) .................................................. 5.35
Fig. 5.2.7: Attenuator with no direct path .......................................................................................... 5.36
Fig. 5.2.8: Inductances owed to circuit loops .................................................................................... 5.37
Fig. 5.2.9: The attenuator and the JFET source follower inductance loop damping .......................... 5.40
Fig. 5.2.10: Step response of the circuit in Fig. 5.2.9 ........................................................................ 5.41
Fig. 5.2.11: The "Hook" effect ........................................... 5.42
Fig. 5.2.12: Canceling the "Hook" effect in common PCB material ..................................................... 5.42
Fig. 5.2.13: Simple offset trimming of a JFET source follower ........................................................ 5.43
Fig. 5.2.14: Active DC correction loop ............................................................................................. 5.44
Fig. 5.2.15: The principle of separate low pass and high pass amplifiers .......................................... 5.45
Fig. 5.2.16: Elimination of the high pass amplifier ........................................................................... 5.46
Fig. 5.2.17: The error amplifier becomes an inverting integrator ...................................................... 5.46
Fig. 5.2.18: High amplitude step response non-linearity ................................................................... 5.47
Fig. 5.2.19: A typical n-channel JFET structure cross-section and circuit model .............................. 5.48
Fig. 5.2.20: A typical n-channel MOSFET cross-section and circuit model ..................................... 5.50
Fig. 5.2.21: Input protection from electrostatic discharge ................................................................. 5.52
Fig. 5.2.22: Input protection for long term overdrive ........................................................................ 5.52
Fig. 5.2.23: JFET source follower with a buffer ................................................................................ 5.54
Fig. 5.2.24: The low impedance attenuator ....................................................................................... 5.55
Fig. 5.3.1: The classical opamp, simplified circuit ............................................................................ 5.58
Fig. 5.3.2: Typical opamp open loop gain and phase compared to the closed loop gain ................... 5.60
Fig. 5.3.3: Slew rate in an opamp with a current mirror .................................................................... 5.62
Fig. 5.3.4: A current feedback amplifier model derived from a voltage feedback opamp ................. 5.62
Fig. 5.3.5: A fully complementary current feedback amplifier model ............................................... 5.63
Fig. 5.3.6: Cross-section of the Complementary Bipolar Process ..................................................... 5.64
Fig. 5.3.7: The four components of C_T .............................................................................................. 5.64
Fig. 5.3.8: Current feedback model used for analysis ........................................................................ 5.65
Fig. 5.3.9: Current on demand operation during the step response .................................................... 5.68
Fig. 5.3.10: Comparison of gain vs. bandwidth for a conventional and a CF amplifier .................... 5.68
Fig. 5.3.11: Non-zero inverting input resistance ................................................................................ 5.69
Fig. 5.3.12: Actual CFB amplifier bandwidth owed to non-zero inverting input resistance .............. 5.72
Fig. 5.3.13: VFB and CFB amplifiers with capacitive feedback ....................................................... 5.73
Fig. 5.3.14: Noise gain definition ...................................................................................................... 5.73
Fig. 5.3.15: An arbitrary example of the phase–magnitude relationship ........................................... 5.74
Fig. 5.3.16: VFB amplifier noise gain derived .................................................................................. 5.75
Fig. 5.3.17: CFB amplifier noise impedance derived ........................................................................ 5.76
Fig. 5.3.18: CFB amplifier and its noise impedance equivalent ........................................................ 5.77
Fig. 5.3.19: Functionally equivalent circuits using VF and CF amplifiers ........................................ 5.78
Fig. 5.3.20: Single resistor bandwidth adjustment for CFB amps ..................................................... 5.79
Fig. 5.3.21: Frequency and step response of the amplifier in Fig. 5.3.20 .......................................... 5.79
Fig. 5.3.22: Improved VF amplifier using folded cascode ................................................................ 5.80
Fig. 5.3.23: Improved VF amplifier derived from a CFB amp .......................................................... 5.81
Fig. 5.3.24: The "Quad Core" structure ............................................................................................. 5.82
Fig. 5.3.25: Output buffer stage with improved current handling ...................................................... 5.83
Fig. 5.3.26: Capacitive load adds a pole to the feedback loop ............................................................ 5.83
Fig. 5.3.27: Amplifier stability driving a capacitive load .................................................................. 5.86
Fig. 5.3.28: Capacitive load compensation by inductance ................................................................. 5.86
Fig. 5.3.29: Capacitive load compensation by separate feedback paths ............................................ 5.87
Fig. 5.3.30: Adaptive capacitive load compensation ......................................................................... 5.88
Fig. 5.3.31: Amplifier with feedback controlled output level clipping .............................................. 5.89
Fig. 5.3.32: Amplifier with output level clipping using separate supplies ......................................... 5.90
Fig. 5.3.33: CFB amplifier output level clipping at Z_T ..................................................................... 5.90
Fig. 5.4.1: Amplifiers with feedback and feedforward error correction ............................................ 5.94
Fig. 5.4.2: Closed loop frequency response of a feedback amplifier ................................................. 5.95
Fig. 5.4.3: Optimized feedforward amplifier frequency response ..................................................... 5.98
Fig. 5.4.4: Grounded load feedforward amplifier ............................................................................ 5.100
Fig. 5.4.5: Error take off principle ................................................................................................ 5.102
Fig. 5.4.6: Current dumping principle (Quad 405) ....................................................................... 5.103
Fig. 5.4.7: Time delay compensation of the feedforward amplifier ................................................. 5.104
Fig. 5.4.8: Simple differential amplifier .......................................................................................... 5.105
Fig. 5.4.9: DC transfer function of a differential amplifier .............................................................. 5.106
Fig. 5.4.10: Test set up used to compare different amplifier configurations ................................... 5.108
Fig. 5.4.11: Simple differential cascode amplifier used as the reference ......................................... 5.108
Fig. 5.4.12: Frequency response of the reference amplifier ............................................................. 5.109
Fig. 5.4.13: Step response of the reference amplifier ...................................................................... 5.109
Fig. 5.4.14: Improved Darlington .................................................................................................... 5.110
Fig. 5.4.15: Frequency response of the improved Darlington differential amplifier ........................ 5.110
Fig. 5.4.16: Step response of the improved Darlington differential amplifier ................................. 5.110
Fig. 5.4.17: Differential amplifier with feedforward error correction ............................................. 5.111
Fig. 5.4.18: Differential amplifier with double error feedforward ................................................... 5.111
Fig. 5.4.19: Frequency response of the differential amplifier with double feedforward .................. 5.112
Fig. 5.4.20: Step response of the differential amplifier with double feedforward ........................... 5.112
Fig. 5.4.21: Cascomp (compensated cascode) amplifier ................................................................. 5.113
Fig. 5.4.22: Frequency response of the Cascomp amplifier ............................................................. 5.113
Fig. 5.4.23: Step response of the Cascomp amplifier ...................................................................... 5.113
Fig. 5.4.24: Differential cascode amplifier with error feedback ...................................................... 5.114
Fig. 5.4.25: Modified error feedforward with direct error sensing and direct correction ................ 5.114
Fig. 5.4.26: Cascomp evolution with feedback ................................................................................ 5.115
Fig. 5.4.27: Frequency response of the Cascomp evolution ............................................................ 5.115
Fig. 5.4.28: Step response of the Cascomp evolution ...................................................................... 5.115
Fig. 5.4.29: Differential cascode amplifier with output impedance compensator ............................ 5.116
Fig. 5.4.30: Frequency response of the amplifier with output impedance compensator .................. 5.116
Fig. 5.4.31: Step response of the amplifier with output impedance compensator ............................ 5.116
Fig. 5.4.32: Basic wideband amplifier block of the M377 IC ......................................................... 5.118
Fig. 5.4.33: The basic M377 block as a compound transistor ......................................................... 5.119
Fig. 5.4.34: The M377 amplifier with fast overdrive recovery ........................................................ 5.120
Fig. 5.4.35: Simulation of the M377 amplifier frequency response ................................................ 5.121
Fig. 5.4.36: Simulation of the M377 amplifier step response .......................................................... 5.121
Fig. 5.4.37: Gain switching in the M377 amplifier .......................................................................... 5.122
Fig. 5.4.38: Fixed gain with R–2R attenuator switching ................................................ 5.122
Fig. 5.4.39: The Gilbert multiplier development ............................................................................. 5.124
Fig. 5.4.40: DC transfer function of the Gilbert multiplier .............................................................. 5.126
Fig. 5.4.41: The Gilbert multiplier has almost constant bandwidth ................................................. 5.126
Fig. 5.4.42: Another way of developing the Gilbert multiplier ........................................................ 5.127
Fig. 5.4.43: The four quadrant multiplier ........................................................................................ 5.127
Fig. 5.4.44: The two quadrant multiplier used in M377 .................................................................. 5.128
List of Tables:
Table 5.1.1: Comparison of component values for different pole assignments ................................. 5.22
Table 5.3.1: Typical production parameters of the Complementary-Bipolar Process ....................... 5.64
other and, as a result, the bandwidth extension factor would suffer. Another possibility
would be to use an additional cascode stage to separate the last two peaking sections,
but another active stage, whilst adding gain, also adds its own problems to be taken
care of. It is, nevertheless, a perfectly valid option.
Let us now take a quick tour of the amplifier schematic, Fig. 5.1.1. We have
two differential cascode stages and two current sources, which set both the transistors'
transconductance and the maximum current available to the load resistors, R_a and R_b.
This limits the voltage range available to the CRT. Since the circuit is differential the
total gain is double that of each half. The total DC gain is (approximately):

A_0 ≈ 2 R_a R_b / (R_e1 R_e3)     (5.1.1)

The values of R_e1 and R_e3 set the required capacitive bypasses, C_e1/2 and
C_e3/2, to match the transistors' time constants. In turn, this sets the input capacitance
at the base of Q_1 and Q_3, to which we must add the inevitable C_cb and some strays.
Fig. 5.1.1: A simple 2-stage differential cascode amplifier with a 7-pole peaking system: the
3-pole T-coil inter-stage peaking (between the Q_2 collector and the Q_3 base) and the 4-pole
L+T-coil output peaking (between the Q_4 collector and the vertical plates of the cathode ray
tube). The schematic was simplified to emphasize the important design aspects (see text).
The capacitance C_d should thus consist, preferably, of only the input
capacitance at the base of Q_3. If required by the coil tuning, a small capacitance can
be added in parallel, but that would also reduce the bandwidth. Note that the
associated T-coil L_d will have to be designed as an inter-stage peaking, as we have
discussed in Part 3, Sec. 3.6, but we can leave the necessary corrections for the end.
The capacitance C_b, owed almost entirely to the CRT vertical plates, is much
larger than C_d, so we expect that R_a and R_b cannot be equal. From this it follows that
it might be difficult to apply equal gain to each stage in accordance with the principle
explained in Part 4, Eq. 4.1.39. Nevertheless, the difference in gain will not be too
high, as we shall see.
Like any other engineering process, geometrical synthesis also starts from
some external boundary conditions which set the main design goal. In this case it is
the CRT's vertical sensitivity and the available input voltage, from which the total
gain is defined. The next condition is the choice of transistors, by which the available
current is defined. Both the CRT and the transistors set the lower limit of the loading
capacitances at various nodes. From these the first circuit component, R_b, is fixed.
With R_b fixed we arrive at the first free parameter, which can be represented
by several circuit components. However, since we would like to maximize the
bandwidth, this parameter should be attributed to one of the capacitances. By
comparing the design equations for the 3-pole T-coil and the 4-pole L+T-coil peaking
networks in Part 2, it can be deduced that C_a, the input capacitance of the 3-pole
section, is the most critical component.
With these boundaries set let us assume the following component values:

C_b = 11 pF     C_a = 4 pF     R_b = 360 Ω     (5.1.2)
The pole pattern is, in general, another free parameter, but for a smooth,
minimum overshoot transient we must apply the Bessel–Thomson arrangement. As
can be seen in Fig. 5.1.2, each pole (pair) defines a circle going through the pole and
the origin, with the center on the negative real axis.
Fig. 5.1.2: The 7 normalized Bessel–Thomson poles and their characteristic circles (K = 1 for
a single real pole, K = 2 for series peaking, K = 4 for T-coil peaking; the circle of each
section scales as K/(RC) of the associated network).
The poles in Fig. 5.1.2 bear the index of the associated circuit components and
the reader might wonder why we have chosen precisely that assignment.
In a general case the assignment of a pole (pair) to a particular circuit section
is yet another free design parameter. If we were designing a low frequency filter we
could indeed have chosen an arbitrary assignment (as long as each complex conjugate
pole pair is assigned as a pair, a limitation owed to physics rather than to circuit theory).
If, however, the bandwidth is an issue then we must seek those nodes with the
largest capacitances and apply the poles with the lowest imaginary part to those circuit
sections. This is because the capacitor impedance (which is dominantly imaginary) is
inversely proportional both to the capacitor value and the signal frequency.
In this light the largest capacitance is at the CRT, that is, C_b; thus the pole pair
with the lowest imaginary part is assigned to the output T-coil section, formed by L_b
and R_b, therefore acquiring the index b: s_1b and s_2b.
The real pole is the one associated with the 3-pole stage, where it is set by
the loading resistor R_a and the input capacitance C_a, becoming s_a.
The remaining two pole pairs should be assigned so that the pair with the
larger imaginary part is applied to the peaking network which has the larger bandwidth
improvement factor. Here we must consider that K = 4 for a T-coil, whilst K = 2 for
the series peaking L-section (of the 4-pole L+T-section). Clearly the pole pair with the
larger imaginary part should be assigned to the inter-stage T-coil, L_d; thus these poles
are labeled s_1d and s_2d. The L-section then receives the remaining pair, s_1c and s_2c.
We have thus arrived at a solution which seems logical, but in order to be sure
that we have made the right choice we should check other combinations as well. We
are going to do so at the end of the design process.
The poles for the normalized 7th-order Bessel–Thomson system, as taken
either from Part 4, Table 4.4.3, or by using the BESTAP routine (Part 6), along with
the associated angles, are:
s_a = σ_a = −4.9718                           θ_a = 180°
s_b = σ_b ± jω_b = −4.7583 ± j1.7393          θ_b = 180° ∓ 20.0787°
s_c = σ_c ± jω_c = −4.0701 ± j3.5172          θ_c = 180° ∓ 40.8316°
s_d = σ_d ± jω_d = −2.6857 ± j5.4207          θ_d = 180° ∓ 63.6439°     (5.1.3)
So, let us now express the basic design equations by the assigned poles and the
components of the two peaking networks.
For the real pole s_a we have the following familiar proportionality:

ω_a = −σ_a = 4.9718 ∝ 1/(R_a C_a)     (5.1.4)

For the output T-coil section:

ω_b = −σ_b / cos²θ_b = 4.7583/0.8821 = 5.3941 ∝ 4/(R_b C_b)     (5.1.5)
For the L-section of the L+T output network, because the T-coil input
impedance is equal to the loading resistor, we have:

ω_c = −σ_c / cos²θ_c = 4.0701/0.5725 = 7.1094 ∝ 2/(R_b C_c)     (5.1.6)

and for the inter-stage T-coil section:

ω_d = −σ_d / cos²θ_d = 2.6857/0.1970 = 13.6333 ∝ 4/(R_a C_d)     (5.1.7)
From these relations we can calculate the required values of the remaining
capacitances, C_c and C_d. If we divide Eq. 5.1.5 by Eq. 5.1.6, we have the ratio:

ω_b / ω_c = [4/(R_b C_b)] / [2/(R_b C_c)] = 2 C_c / C_b     (5.1.8)

from which:

C_c = (C_b/2)(ω_b/ω_c) = (11/2)(5.3941/7.1094) = 4.1730 pF     (5.1.9)

Similarly, dividing Eq. 5.1.4 by Eq. 5.1.7:

ω_a / ω_d = [1/(R_a C_a)] / [4/(R_a C_d)] = C_d / (4 C_a)     (5.1.10)

so that:

C_d = 4 C_a (ω_a/ω_d) = 4 · 4 · (4.9718/13.6333) = 5.8349 pF     (5.1.11)

Finally, dividing Eq. 5.1.5 by Eq. 5.1.4:

ω_b / ω_a = [4/(R_b C_b)] / [1/(R_a C_a)] = 4 R_a C_a / (R_b C_b)     (5.1.12)

resulting in:

R_a = [R_b C_b / (4 C_a)] (ω_b/ω_a) = [360 · 11/(4 · 4)] (5.3941/4.9718) = 268.5 Ω     (5.1.13)
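The arithmetic of Eqs. 5.1.8–5.1.13 is easy to mechanize. The short Python sketch below (the book's own routines in Part 6 are Matlab-style; this is only an equivalent check) repeats it from the starting values of Eq. 5.1.2 and the normalized Bessel–Thomson pole data of Eq. 5.1.3:

```python
# Starting values (Eq. 5.1.2) and normalized pole data (Eq. 5.1.3)
Cb = 11e-12          # F
Ca = 4e-12           # F
Rb = 360.0           # ohm

wa = 4.9718                       # = -sigma_a, proportional to 1/(Ra*Ca)
wb = 4.7583 / 0.8821              # = -sigma_b/cos^2(theta_b) -> 4/(Rb*Cb)
wc = 4.0701 / 0.5725              # = -sigma_c/cos^2(theta_c) -> 2/(Rb*Cc)
wd = 2.6857 / 0.1970              # = -sigma_d/cos^2(theta_d) -> 4/(Ra*Cd)

Cc = (Cb/2) * (wb/wc)             # Eq. 5.1.9
Cd = 4*Ca * (wa/wd)               # Eq. 5.1.11
Ra = (Rb*Cb/(4*Ca)) * (wb/wa)     # Eq. 5.1.13

print(Cc*1e12, Cd*1e12, Ra)       # ~4.17 pF, ~5.83 pF, ~268.5 ohm
```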
We are now ready to calculate the inductances L_b, L_c and L_d. For the two
T-coils we can use Eq. 2.4.19:

L_b = R_b² C_b = 360² · 11 · 10⁻¹² = 1.4256 μH     (5.1.14)

and:

L_d = R_a² C_d = 268.5² · 5.8349 · 10⁻¹² = 0.4206 μH     (5.1.15)

For L_c we use Eq. 2.2.26 to obtain the proportionality factor of the RC constant:

L_c = (1 + tan²θ_b) R_b² C_c / 4 = (1.1336/4) · 360² · 4.1730 · 10⁻¹² = 0.1533 μH     (5.1.16)
The magnetic coupling factors for the two T-coils are calculated by Eq. 2.4.36:

k_b = (3 − tan²θ_b)/(5 + tan²θ_b) = (3 − 0.1336)/(5 + 0.1336) = 0.5584     (5.1.17)

and likewise:

k_d = (3 − tan²θ_d)/(5 + tan²θ_d) = (3 − 4.0738)/(5 + 4.0738) = −0.1183     (5.1.18)
and likewise:
5d
Note that 5d is negative. This means that, instead of the usually negative
mutual inductance, we need a positive inductance at the T-coil tap. This can be
achieved by simply mounting the two halves of Pd perpendicular to each other, in
order to have zero magnetic coupling and then introduce an additional coil, Pe (again
perpendicular to both halves of Pd ), with a value of the required positive mutual
inductance, as can be seen in Fig. 5.1.3. Another possibility would be to wind the two
halves of Pd in opposite direction, but then the bridge capacitance Gbd might be
difficult to realize correctly.
Fig. 5.1.3: With the assigned poles and the resulting particular component values the 3-pole
stage magnetic coupling k_d needs to be negative, which forces us to use non-coupled coils
and add a positive mutual inductance L_e. Even with a negative k_d the T-coil reflects its
resistive load to the network input, greatly simplifying the calculation of component values.
For a T-coil made of two halves L_1 and L_2 with the mutual inductance L_M:

L = L_1 + L_2 + 2 L_M     L_1 = L_2 = L/[2(1 + k)]     L_M = k √(L_1 L_2)     (5.1.19)

Thus, if k = 0 we have:

L_1d = L_2d = L_d/2 = 0.4206/2 = 0.2103 μH     (5.1.20)

and:

L_e = |k_d| L_d/2 = 0.1183 · 0.2103 = 0.025 μH     (5.1.21)
If we were to account for the Q_3 base resistance (discussed in Part 3, Sec. 3.6)
we would get a k_d even more negative and also L_1d ≠ L_2d.
The coupling factor k_b, although positive, also poses a problem: since it is
greater than 0.5 it might be difficult to realize. As can be noted from the above
equations, the value of k depends only on the pole's angle θ. In fact, the 2nd-order
Bessel system has pole angles of 150°, resulting in k = 0.5, representing the
limiting case of realizability with conventionally wound coils. Special shapes, coil
overlapping, or other exotic techniques may solve the coupling problem, but, more
often than not, they will also impair the bridge capacitance. The other limiting case,
k = 0, is reached when the ratio ω/|σ| = √3, a situation occurring when
the pole's angle θ = 120°.
In accordance with the previous equations we also calculate the value of the two
halves of L_b:

L_1b = L_2b = L_b/[2(1 + k_b)] = 1.4256/[2(1 + 0.5584)] = 0.4574 μH     (5.1.22)
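Continuing the same computation chain, the inductances and coupling factors follow mechanically from the pole data; a Python sketch (tan θ is taken directly from the normalized pole components of Eq. 5.1.3, and R_a, C_d are the values computed above):

```python
Rb, Ra = 360.0, 268.5             # ohm (Ra from Eq. 5.1.13)
Cb, Cd = 11e-12, 5.8349e-12       # F   (Cd from Eq. 5.1.11)

tan2_b = (1.7393/4.7583)**2       # tan^2(theta_b) from the pole s_b
tan2_d = (5.4207/2.6857)**2       # tan^2(theta_d) from the pole s_d

Lb = Rb**2 * Cb                   # Eq. 5.1.14 -> 1.4256 uH
Ld = Ra**2 * Cd                   # Eq. 5.1.15 -> 0.4206 uH

kb = (3 - tan2_b)/(5 + tan2_b)    # Eq. 5.1.17 -> +0.5584
kd = (3 - tan2_d)/(5 + tan2_d)    # Eq. 5.1.18 -> -0.1183 (negative!)

L1b = Lb/(2*(1 + kb))             # Eq. 5.1.22 -> 0.4574 uH
L1d = Ld/2                        # Eq. 5.1.20 (k_d realized as 0) -> 0.2103 uH
Le  = abs(kd)*Ld/2                # Eq. 5.1.21 -> ~0.025 uH

print(Lb*1e6, Ld*1e6, kb, kd, L1b*1e6, Le*1e6)
```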
Fig. 5.1.4: The 4-pole output L+T-coil stage and its pole assignment.
The last components to be calculated are the bridge capacitances, C_bb and C_bd.
The relation between the T-coil loading capacitance and the bridge capacitance has
been given already in Part 2, Eq. 2.4.31, from which we obtain the following
expressions for C_bb and C_bd:
C_bb = C_b (1 + tan²θ_b)/16 = 11 · (1 + 0.1336)/16 = 0.7793 pF     (5.1.23)

and:

C_bd = C_d (1 + tan²θ_d)/16 = 5.8349 · (1 + 4.0738)/16 = 1.8503 pF     (5.1.24)
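The two bridge capacitances can be verified with the same few lines of Python as before (tan θ again taken from the normalized poles of Eq. 5.1.3):

```python
Cb, Cd = 11e-12, 5.8349e-12       # F (Cd from Eq. 5.1.11)
tan2_b = (1.7393/4.7583)**2       # tan^2(theta_b)
tan2_d = (5.4207/2.6857)**2       # tan^2(theta_d)

Cbb = Cb*(1 + tan2_b)/16          # Eq. 5.1.23 -> ~0.779 pF
Cbd = Cd*(1 + tan2_d)/16          # Eq. 5.1.24 -> ~1.850 pF
print(Cbb*1e12, Cbd*1e12)
```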
With all the component values known we can calculate the actual pole components of
the amplifier (index A), expressed as magnitudes (the actual poles are −σ ± jω):

σ_aA = 1/(R_a C_a) = 1/(268.5 · 4 · 10⁻¹²) = 9.311 · 10⁸ rad/s

σ_bA = 4 cos²θ_b/(R_b C_b) = 4 · 0.8821/(360 · 11 · 10⁻¹²) = 8.910 · 10⁸ rad/s
ω_bA = 4 |cos θ_b sin θ_b|/(R_b C_b) = 4 · 0.9392 · 0.3433/(360 · 11 · 10⁻¹²) = 3.257 · 10⁸ rad/s

σ_cA = 2 cos²θ_c/(R_b C_c) = 2 · 0.5725/(360 · 4.1730 · 10⁻¹²) = 7.622 · 10⁸ rad/s
ω_cA = 2 |cos θ_c sin θ_c|/(R_b C_c) = 2 · 0.7566 · 0.6538/(360 · 4.1730 · 10⁻¹²) = 6.585 · 10⁸ rad/s

σ_dA = 4 cos²θ_d/(R_a C_d) = 4 · 0.1970/(268.5 · 5.8349 · 10⁻¹²) = 5.030 · 10⁸ rad/s
ω_dA = 4 |cos θ_d sin θ_d|/(R_a C_d) = 4 · 0.4439 · 0.8961/(268.5 · 5.8349 · 10⁻¹²) = 10.156 · 10⁸ rad/s

     (5.1.25)
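The claim made next, that a single scale factor maps all the normalized pole components onto the actual ones, is easy to check numerically. A Python sketch, using the component values computed above (the pole data are those of Eq. 5.1.3):

```python
import math

Ra, Rb = 268.5, 360.0                       # ohm
Ca, Cb = 4e-12, 11e-12                      # F
Cc, Cd = 4.1730e-12, 5.8349e-12             # F

# normalized pole components (Eq. 5.1.3), as (sigma, omega) magnitudes
norm = {'a': (4.9718, 0.0), 'b': (4.7583, 1.7393),
        'c': (4.0701, 3.5172), 'd': (2.6857, 5.4207)}

def cos_sin(sig, om):
    """|cos(theta)| and |sin(theta)| of the pole angle."""
    mag = math.hypot(sig, om)
    return sig/mag, om/mag

ratios = [(1.0/(Ra*Ca)) / norm['a'][0]]     # real pole: sigma_aA / sigma_a
for key, K, R, C in [('b', 4, Rb, Cb), ('c', 2, Rb, Cc), ('d', 4, Ra, Cd)]:
    sig, om = norm[key]
    c, s = cos_sin(sig, om)
    ratios.append((K*c*c/(R*C)) / sig)      # sigma_A / sigma
    ratios.append((K*c*s/(R*C)) / om)       # omega_A / omega

scale = ratios[0]
print(scale)                                 # ~1.873e8, common to all ratios
```

All seven ratios agree to within a small fraction of a percent, confirming that the denormalization is a pure frequency scaling.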
If we divide the real amplifier pole by the real normalized pole, we get:

σ_aA/σ_a = 9.311 · 10⁸/4.9718 = 1.873 · 10⁸     (5.1.26)
and this factor is equal for all other pole components. Unfortunately, from this we
cannot calculate the upper half power frequency of the amplifier. The only way to do
that (for a Bessel system) is to calculate the response for a range of frequencies around
the cut off and then iterate it using the bisection method, until a satisfactory tolerance
has been achieved.
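The bisection just described takes only a few lines. A Python sketch (the book's Part 6 routines are Matlab-style; here the normalized poles of Eq. 5.1.3 are scaled by the factor of Eq. 5.1.26, and the non-compensated poles follow Eq. 5.1.27 further below):

```python
import math

SCALE = 1.873e8                    # Eq. 5.1.26
# normalized 7-pole Bessel-Thomson set (Eq. 5.1.3), scaled to the amplifier
norm = [(-4.9718, 0.0), (-4.7583, 1.7393), (-4.7583, -1.7393),
        (-4.0701, 3.5172), (-4.0701, -3.5172),
        (-2.6857, 5.4207), (-2.6857, -5.4207)]
poles_A = [complex(s, o)*SCALE for s, o in norm]

# non-compensated amplifier: two real poles set by the total node capacitances
Ra, Rb = 268.5, 360.0
Ca, Cb, Cc, Cd = 4e-12, 11e-12, 4.1730e-12, 5.8349e-12
poles_N = [complex(-1.0/(Ra*(Ca + Cd))), complex(-1.0/(Rb*(Cb + Cc)))]

def magnitude(poles, f):
    """|F(j 2 pi f)| of an all-pole system, DC gain normalized to 1."""
    w = 2*math.pi*f
    num, den = 1.0, 1.0
    for p in poles:
        num *= abs(p)
        den *= abs(1j*w - p)
    return num/den

def f_3dB(poles, lo=1e3, hi=1e10):
    """Bisection (on a log scale) for |F| = 1/sqrt(2)."""
    target = 1.0/math.sqrt(2)
    for _ in range(80):
        mid = math.sqrt(lo*hi)
        if magnitude(poles, mid) > target:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo*hi)

fA = f_3dB(poles_A)
fN = f_3dB(poles_N)
print(fA/1e6, fN/1e6, fA/fN)
```

The result, roughly 88 MHz for the compensated and 25 MHz for the non-compensated system, agrees with Fig. 5.1.6.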
Instead of doing it for only a small range of frequencies we shall, rather, do it
for a three decade range and compare the resulting response with the one we would
get from a non-compensated amplifier (in which all the inductances are zero). Since to
this point we were not interested in the actual value of the voltage gain, we shall make
the comparison using amplitude normalized responses.
The two real poles of the non-compensated amplifier are set by the loading resistors
and the total node capacitances:

ω_1N = 1/[R_a (C_a + C_d)]     and     ω_2N = 1/[R_b (C_b + C_c)]     (5.1.27)

With s_1N = −ω_1N and s_2N = −ω_2N the normalized transfer function is:

F_N(s) = s_1N s_2N / [(s − s_1N)(s − s_2N)]     (5.1.28)

with the magnitude:

|F_N(ω)| = s_1N s_2N / √[(ω² + s_1N²)(ω² + s_2N²)]     (5.1.29)

and the step response:

g_N(t) = Σ res [s_1N s_2N e^(st) / (s (s − s_1N)(s − s_2N))] = 1 − (s_2N e^(s_1N t) − s_1N e^(s_2N t))/(s_2N − s_1N)     (5.1.30)

The upper half power frequency follows from the condition |F_N(ω_h)|² = 1/2:

ω_hN² = √{[(s_1N² + s_2N²)/2]² + (s_1N s_2N)²} − (s_1N² + s_2N²)/2     (5.1.31)

f_hN = ω_hN/(2π)     (5.1.32)

For the compensated amplifier, F_A(s) is the product of all seven pole factors:

F_A(s) = Π(−s_k) / Π(s − s_k),     k ∈ {aA, 1bA, 2bA, 1cA, 2cA, 1dA, 2dA}     (5.1.33)

and the step response is the inverse Laplace transform of the product of F_A(s) with
the unit step operator 1/s:

g(t) = ℒ⁻¹{F_A(s)·(1/s)} = Σ res [F_A(s) e^(st)/s]     (5.1.34)
Carrying out this calculation by hand is a relatively simple operation and we leave it as
an exercise to the reader. Instead we are going to use the computer routines, the
development of which can be found in Part 6.
In Fig. 5.1.5 we have made a polar plot of the poles of the inductively
compensated 7-pole system and the non-compensated 2-pole system. As we have
learned in Part 1 and Part 2, the farther a pole lies from the origin, the smaller is its
influence on the system response. It is therefore obvious that the 2-pole system's
response will be dominated by the pole closer to the origin, and that is the pole of the
output stage, s_2N. The bandwidth of the 7-pole system is, obviously, much larger.
Fig. 5.1.5: The polar plot of the 7-pole compensated system (poles
with index A) and the 2-pole non-compensated system (index N).
The radial scale is "!* rads. The angle is in degrees.
[Figure omitted: gain-normalized magnitude responses |F_A(f)| and |F_N(f)| vs.
frequency, with the 0.707 level marked.]
Fig. 5.1.6: The gain normalized magnitude vs. frequency of the 7-pole compensated
system |F_A(f)| and the 2-pole non-compensated system |F_N(f)|. The bandwidth of
F_N is about 25 MHz and the bandwidth of F_A is about 88 MHz, more than 3.5 times
larger.
[Figure omitted: gain-normalized step responses g_A(t) and g_N(t), t = 0-25 ns.]
Fig. 5.1.7: The gain normalized step responses of the 7-pole compensated system
g_A(t) and the 2-pole non-compensated system g_N(t). The rise time is 14 ns for
g_N(t), but only 3.8 ns for g_A(t). The overshoot of g_A(t) is only 0.48 %.
[Figure omitted: four step responses labeled abcd, abdc, acbd, adbc, t = 10-20 ns.]
Fig. 5.1.8: The normalized step responses of the four possible combinations of pole
assignments. There are two pairs of responses, here spaced vertically by a small
offset to allow easier identification. One of the two faster responses (labeled abcd)
is the one for which the detailed analysis has been given in the text.
If the pole pairs s_c and s_d are mutually exchanged, the result is the same as in our
original analysis. But by exchanging s_b with either s_c or s_d the result is sub-optimal.
A closer look at Table 5.1.1 reveals that both of the slower responses have
R_a = 354 Ω instead of 268 Ω. The higher value of R_a actually means a higher gain, as
can be seen in Fig. 5.1.9, where the original system was set for a gain of A_0 = 10, in
contrast with the higher value, A_0 = 13. The higher gain results from a different
tuning of the 3-pole T-coil stage, in accordance with the different pole assignment.
[Figure omitted: the four step responses (abcd, abdc, acbd, adbc) plotted with the
actual gain, vertical scale 0-14, t = 10-20 ns.]
Fig. 5.1.9: The slower responses of Fig. 5.1.8, when plotted with the actual gain, are
those with the higher value of R_a and therefore a higher gain.
Since our primary design goal is to maximize the bandwidth at a given gain,
let us recalculate the slower system for a lower value of R_b. If R_b = 316 Ω (from the
E96 series of standard values, 0.5 % tolerance), the gain is restored. Fig. 5.1.10 shows
the recalculated responses, labeled acbd and adbc, compared to the responses
obtained with the abcd and abdc pole assignments.
[Figure omitted: recalculated step responses, R_b = 360 Ω for abcd and abdc,
R_b = 316 Ω for acbd and adbc; vertical scale 0-12, t = 10-20 ns.]
Fig. 5.1.10: If the high gain responses are recalculated by reducing R_b from the
original 360 Ω to 316 Ω, the gain is nearly equal in all four cases. However, those
pole assignments which put the poles with the higher imaginary part at the output
stage still result in a slightly slower system.
The difference in rise time between the two pairs is much smaller now;
however, the recalculated pair is still slightly slower. This shows that our initial
assumptions about how to achieve maximum bandwidth (within a given configuration)
were not mere guesswork.
In Table 5.1.1 we have collected all the design parameters for four of the
six possible pole assignments. The systems in the last two columns have the same
pole assignments as in the middle two, but have been recalculated for a lower R_b
value, in order to obtain a total voltage gain nearly equal to that of the first system. From a
practical point of view the first and the last column are the most interesting: the
system represented by the first column is the fastest (as is the second one, but the latter
is difficult to realize, mainly owing to the low C_c value), whilst the last one is only
slightly slower but much easier to realize, mainly owing to a lower magnetic coupling
k_b and the non-problematic values of C_c and C_d.
Table 5.1.1

  R_b [Ω]        360      360      360      360      316      316
  pole order:    abcd     abdc     acbd     adbc     acbd     adbc
  A_0            9.667    9.667    12.74    12.74    9.817    9.817
  R_a [Ω]        268.5    286.5    353.9    353.9    310.7    310.7
  C_c [pF]       4.173    2.177    2.870    7.249    2.870    7.249
  C_d [pF]       5.838    1.119    1.475    5.838    1.475    5.838
  C_bb [pF]      0.779    0.779    1.201    1.201    1.201    1.201
  C_bd [pF]      1.851    1.222    1.045    1.851    1.045    1.851
  L_b [μH]       1.426    1.426    1.426    1.426    1.098    1.098
  L_c [μH]       0.153    0.080    0.162    0.410    0.125    0.316
  L_d [μH]       0.421    0.807    1.847    0.731    1.423    0.563
  k_b            0.558    0.558    0.392    0.392    0.392    0.392
  k_d            0.118    0.392    0.558    0.118    0.558    0.118
  η_b            3.57     3.57     2.35     2.35     2.84     2.84
  η_r            3.55     3.55     2.33     2.33     2.81     2.81

Table 5.1.1: Circuit components for 4 of the 6 possible pole assignments. The last two
columns represent the same pole assignments as the middle two, but have been recalculated
for R_b = 316 Ω and nearly equal gain. The first column is the example calculated in the text
and its response is one of the two fastest. The other fast system (second column) is probably
not realizable (in discrete form), because C_c ≈ 2 pF. The last column (adbc) is, on the other
hand, only slightly slower, but probably much easier to realize (T-coil coupling and the
capacitance values). The bandwidth and rise time improvement factors η_b and η_r were
calculated by taking the non-compensated amplifier responses as the reference.
The main problem encountered in the realization of our original abcd system
is the relatively high magnetic coupling factor of the output T-coil, k_b. A possible way
of improving this could be to apply a certain amount of emitter peaking to either
the Q_1 or Q_3 emitter circuit. Then we would have a 9-pole system and we would have
to recalculate everything. However, the use of emitter peaking results in a negative
input impedance which has to be compensated (see Part 3, Sec. 3.5), and the
compensating network adds more stray capacitance.
A 9-pole system might be more easily implemented if, instead of the 3-pole
section, we were to use another L+T-coil 4-pole network. The real pole could then be
provided by the signal source resistance and the Q_1 input capacitance, which we have
chosen to neglect so far. With 9 poles, both T-coils can be made to accommodate the
two pole pairs with moderate imaginary part values (because the T-coil coupling
factor depends only on the pole angle θ), so that the system bandwidth could be more
easily maximized. A problem could arise with the low value of some capacitances,
which might become difficult to achieve. But, as is evident from Table 5.1.1, there are
many possible variations (their number increases as the factorial of the number of
poles), so a clever compromise can always be made. Of course, with a known signal
source, additional inductive peaking could be applied at the input, resulting in a
total of 11 or perhaps even 13 poles, but then the component tolerances and the
adjustment precision would set the limits of realizability.
Finally, we would like to verify the initial claim that the input real pole s_1,
owed to the signal source resistance, the base spread resistance, and the total input
capacitance, can be neglected if it is larger than the system real pole s_a. Since the
input pole is separated from the rest of the system by the first cascode stage, it can be
accounted for by simply multiplying the system transfer function by it. In the
frequency response its influence is barely noticeable. In the step response, Fig. 5.1.11,
it affects mostly the envelope delay and the overshoot, while the rise time (in
accordance with the frequency response) remains nearly the same.
[Figure omitted: step responses for s_1 = m·s_a, with m = 10, 2, 1.1, where
s_1 = 1/[(R_s + r_b1) C_in] and s_a = 1/(R_a C_a); vertical scale 0-10, t = 10-20 ns.]
Fig. 5.1.11: If the real input pole s_1 is at least twice as large as the system's real
pole s_a, its influence on the step response can be seen merely as an increased
envelope delay and a reduced overshoot, while the rise time remains nearly identical.
[Figure omitted: a sectional vertical deflection system — the CRT plate sections are
connected by T-coil peaking circuits (R_b, coupling k, inductances L), driven from
V_g; supply V_cc.]
Fig. 5.1.12: If the CRT deflection plates are made in a number of sections (usually between 4
and 8), connected by a series of T-coil peaking circuits, the amplifier is effectively
loaded by a much smaller capacitance, allowing the system cutoff frequency to be several times
higher. The T-coils also provide the time delay necessary to keep the deflecting voltage (as seen
by the electrons in the writing beam) almost constant throughout the electrons' travel time across
the deflecting field. For simplicity, only the vertical deflection system is shown, but a similar
circuit could be used for the horizontal deflection, too (such an example can be found in the
1 GHz Tektronix 7104 model; see Appendix 5.1 for further details). Note that, owing to the
increasing distance between the plates, their length should also vary accordingly, in order to
compensate for the reduced capacitance. Fortunately, the capacitance is also a function of the
plates' width, not just length and distance, so a well balanced compromise can always be found.
- the upper cut off frequency must be at least twice the system's bandwidth;
- the upper cut off frequency should be independent of any of the above settings;
- the gain flatness must be kept within 0.5 % from DC to 1/5 of the bandwidth;
- the protection diodes must survive repeated 12 A surge currents with < 1 ns
  rise and 50 μs decay, their leakage must be < 100 pA and capacitance < 1 pF.
This is an impressive list, indeed, especially if we consider that for a 500 MHz
system bandwidth the above requirements should be fulfilled for a 1 GHz bandwidth.
A typical input stage block diagram is shown in Fig. 5.2.1. The attenuator and
the unity gain buffer stage will be analyzed in the following sections.
[Figure omitted: block diagram — BNC input, optional R_t = 50 Ω termination, spark
gap, DC-GND-AC coupling selector (C_AC = 33 nF), 1 MΩ input resistance, 1:1 / 10:1 /
100:1 attenuator (15 pF input capacitance), overdrive protection (R_p = 150 kΩ,
R_d = 150 Ω, 1.5 nF, clamping to V_cc and V_ee), unity gain buffer (A_v = +1) with
R_o = 50 Ω output.]
Fig. 5.2.1: A typical conventional oscilloscope input section. All the switches must be high
voltage types, operated either mechanically or as electromagnetic relays (but other
solutions are also possible, as in [Ref. 5.2]). The spark gap protects against electrostatic
discharge. The R_t = 50 Ω resistor is the optional transmission line termination. The 1 MΩ
resistor in the DC-GND-AC selector charges the AC coupling capacitor in the GND
position, reducing the overdrive shock through C_AC in the presence of a large DC signal
component. The attenuator is analyzed in detail in Sec. 5.2.1 to 5.2.3. The overdrive
protection limits the input current in case of an accidental connection to the 240 V AC mains
with the attenuator set to the highest sensitivity. The unity gain buffer/impedance
transformer is a > 100 MΩ input, 50 Ω output JFET or MOSFET source follower, analyzed
in Sec. 5.2.4 and 5.2.5.
[Figure omitted: a) resistive divider 9R/R (R_1 = 900 kΩ, R_2 = 100 kΩ) loaded by C_i;
b) compensated divider, C_1 = 10 pF, C_2a = 82 pF; c) adjustable, C_2b = 10-30 pF.]
Fig. 5.2.2: The 10:1 attenuator; a) resistive: with R_2 = 100 kΩ, a following stage input
capacitance of just 1 pF would limit the bandwidth to only 1.77 MHz; b) compensated: the
capacitive divider takes over at high frequencies, but the input capacitance of the following
stage of 1 pF would spoil the division by 1 %; c) adjustable: in practice, the capacitive
divider is trimmed for a perfect step response.
For the resistive divider of Fig. 5.2.2a the attenuation is:

    A = v_i/v_o = (R_1 + R_2)/R_2                                          (5.2.1)

    v_o = v_i/A                                                            (5.2.2)

    R_1 = (A - 1) R_2                                                      (5.2.3)

so for A = 10 and R_1 = 900 kΩ:

    R_2 = 100 kΩ                                                           (5.2.4)

and for the capacitive divider:

    v_o/v_i = X_C2/(X_C1 + X_C2) = [1/(jωC_2)] / [1/(jωC_1) + 1/(jωC_2)]
            = C_1/(C_1 + C_2)                                              (5.2.5)

    C_2/C_1 = A - 1                                                        (5.2.6)

The divider is compensated when both attenuations are equal, which requires:

    R_1 C_1 = R_2 C_2                                                      (5.2.7)

The sum of the two parallel RC impedances is then:

    Z_1 + Z_2 = R_1/(1 + jωC_1 R_1) + R_2/(1 + jωC_2 R_2)
              = (R_1 + R_2)/(1 + jωτ_a)                                    (5.2.8)

In the latter expression we have taken into account Eq. 5.2.7. This is the same as if we
had a single parallel R_a C_a network:

    Z_a = R_a/(1 + jωC_a R_a)                                              (5.2.9)

where:

    R_a = R_1 + R_2   and   C_a = 1/(1/C_1 + 1/C_2)                        (5.2.10)

    C_a = 1/[1/C_1 + 1/((A - 1)C_1)] = C_1 (A - 1)/A                       (5.2.11)

The attenuator transfer function is:

    v_out/v_in = Z_2/(Z_1 + Z_2) = 1/(1 + Z_1/Z_2)
               = 1 / [1 + (R_1/R_2) · (1 + jωC_2 R_2)/(1 + jωC_1 R_1)]     (5.2.12)
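Eq. 5.2.12 can be checked numerically. The short Python sketch below uses the component values of Fig. 5.2.2 and evaluates the division ratio at a low and a high frequency, for a matched and a mismatched C_2:

```python
import math

def divider(w, R1, R2, C1, C2):
    # Eq. 5.2.12 in impedance form: vout/vin = Z2 / (Z1 + Z2),
    # with Z = R / (1 + jwRC) for each parallel RC arm
    z1 = R1 / (1 + 1j * w * R1 * C1)
    z2 = R2 / (1 + 1j * w * R2 * C2)
    return z2 / (z1 + z2)

R1, R2, C1 = 900e3, 100e3, 10e-12
lo, hi = 2 * math.pi * 10, 2 * math.pi * 100e6   # 10 Hz and 100 MHz

print(abs(divider(lo, R1, R2, C1, 90e-12)))   # matched: 0.1 at any frequency
print(abs(divider(hi, R1, R2, C1, 90e-12)))   # matched: still 0.1
print(abs(divider(hi, R1, R2, C1, 100e-12)))  # mismatched: HF ratio near 10/110
```

With R_1 C_1 = R_2 C_2 the ratio stays at exactly 1/10; with C_2 = 100 pF the high frequency division drifts toward the purely capacitive ratio C_1/(C_1 + C_2).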
Obviously, the frequency dependence vanishes if the condition of Eq. 5.2.7
is met. However, the transfer function will be independent of frequency only if the
signal source impedance is zero (we are going to see the effects of the signal source
impedance a little later).
The transfer function of an unadjusted attenuator (R_1 C_1 ≠ R_2 C_2) has a simple
pole and a simple zero, as can be deduced from Eq. 5.2.12. If we rewrite the
impedances as:
    Z_1 = 1/(1/R_1 + sC_1) = R_1 · [1/(R_1 C_1)] / [s + 1/(R_1 C_1)]
        = R_1 s_1/(s + s_1)                                                (5.2.13)

and

    Z_2 = 1/(1/R_2 + sC_2) = R_2 · [1/(R_2 C_2)] / [s + 1/(R_2 C_2)]
        = R_2 s_2/(s + s_2)                                                (5.2.14)

where s_1 and s_2 represent the poles of each impedance arm, explicitly:

    s_1 = 1/(R_1 C_1)   and   s_2 = 1/(R_2 C_2)                            (5.2.15)

the transfer function becomes:

    v_out/v_in = Z_2/(Z_1 + Z_2)
               = [R_2 s_2/(s + s_2)] / [R_1 s_1/(s + s_1) + R_2 s_2/(s + s_2)]   (5.2.16)
By solving the double divisions, the transfer function can be rewritten as:

    v_out/v_in = s_2 R_2 (s + s_1) / [s_1 R_1 (s + s_2) + s_2 R_2 (s + s_1)]   (5.2.17)

Since s_1 R_1 = 1/C_1 and s_2 R_2 = 1/C_2, this becomes:

    v_out/v_in = [(1/C_2)(s + s_1)] / [(1/C_1)(s + s_2) + (1/C_2)(s + s_1)]
               = [C_1/(C_1 + C_2)] · (s + s_1) / [s + (s_1 C_1 + s_2 C_2)/(C_1 + C_2)]   (5.2.18)

This can be simplified by defining a few useful substitutions: the capacitive divider
attenuation:

    A_C = C_1/(C_1 + C_2)                                                  (5.2.19)

the system zero:

    s_z = s_1 = 1/(R_1 C_1)                                                (5.2.20)

and the system pole:

    s_p = (s_1 C_1 + s_2 C_2)/(C_1 + C_2)                                  (5.2.21)

    s_p = [C_1/(R_1 C_1) + C_2/(R_2 C_2)] / (C_1 + C_2)
        = (1/R_1 + 1/R_2) / (C_1 + C_2)
        = (R_1 + R_2) / [R_1 R_2 (C_1 + C_2)]                              (5.2.22)
From the system pole we note that the system time constant is equal to the
parallel connection of all four components.
We will also define the resistive attenuation as:

    A_R = R_2/(R_1 + R_2)                                                  (5.2.23)
so that the system pole can be written as:

    s_p = (R_1 + R_2)/[R_1 R_2 (C_1 + C_2)] = 1/[A_R R_1 (C_1 + C_2)]      (5.2.24)

With these substitutions the transfer function of Eq. 5.2.12 becomes:

    F(s) = v_out/v_in = A_C (s + s_z)/(s + s_p)                            (5.2.25)
The frequency response is obtained by substituting s with jω:

    F(jω) = A_C (jω + s_z)/(jω + s_p)                                      (5.2.26)

and its magnitude is:

    |F(jω)| = A_C sqrt[(ω² + s_z²)/(ω² + s_p²)]                            (5.2.27)

The phase angle is the arctangent of the imaginary to real component ratio of
the frequency response F(jω):

    φ(ω) = arctan( Im{F(jω)} / Re{F(jω)} )                                 (5.2.28)

First we must rationalize F(jω) by multiplying both the numerator and the
denominator by the complex conjugate of the denominator, (-jω + s_p):

    (jω + s_z)(-jω + s_p) / [(jω + s_p)(-jω + s_p)]
        = [ω² + jω(s_p - s_z) + s_z s_p] / (ω² + s_p²)

and then we separate the real and imaginary part:

    (ω² + s_z s_p)/(ω² + s_p²) + j ω(s_p - s_z)/(ω² + s_p²)

The phase angle is then:

    φ(ω) = arctan [ ω(s_p - s_z) / (ω² + s_z s_p) ]                        (5.2.29)

By considering the identity:

    arctan B - arctan C = arctan [ (B - C)/(1 + BC) ]

we can write:

    φ(ω) = arctan (s_p/ω) - arctan (s_z/ω)                                 (5.2.30)

The envelope delay is the frequency derivative of the phase:

    τ_d = dφ/dω = d/dω [ arctan (s_p/ω) - arctan (s_z/ω) ]
        = (-s_p/ω²)/[1 + (s_p/ω)²] - (-s_z/ω²)/[1 + (s_z/ω)²]              (5.2.31)

    τ_d = s_z/(ω² + s_z²) - s_p/(ω² + s_p²)                                (5.2.32)
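The phase and envelope delay expressions can be cross-checked by differentiating the phase numerically. A Python sketch, using the component values of Fig. 5.2.3 with the undercompensated 80 pF case:

```python
import math

R1, R2, C1, C2 = 900e3, 100e3, 10e-12, 80e-12    # undercompensated (C2 low)
s1, s2 = 1 / (R1 * C1), 1 / (R2 * C2)
sz = s1                                          # Eq. 5.2.20: system zero
sp = (s1 * C1 + s2 * C2) / (C1 + C2)             # Eq. 5.2.21: system pole

def phase(w):
    # Eq. 5.2.30
    return math.atan(sp / w) - math.atan(sz / w)

def delay(w):
    # Eq. 5.2.32
    return sz / (w * w + sz * sz) - sp / (w * w + sp * sp)

w = 2 * math.pi * 10e3                           # 10 kHz, inside the hook region
dw = w * 1e-6
numeric = (phase(w + dw) - phase(w - dw)) / (2 * dw)
print(delay(w), numeric)                         # analytic and numeric agree
```

The central-difference derivative of Eq. 5.2.30 matches the closed form of Eq. 5.2.32 to several significant digits, which is a quick sanity check on the algebra above.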
[Figure omitted: magnitude (around -20 dB), phase (±3°), and envelope delay (±1 μs)
plots from 10 Hz to 100 MHz, for C_2 = 80, 90, and 100 pF, with R_1 = 900 kΩ,
C_1 = 10 pF, R_2 = 100 kΩ.]
Fig. 5.2.3: The attenuator magnitude, phase, and envelope delay responses for the correctly
compensated case (flat lines), along with the under- and over-compensated cases (C_2 is
trimmed by ±10 pF). Note that these same figures also apply to oscilloscope passive probe
compensation, demonstrating the importance of correct compensation when making single
channel pulse measurements and two channel differential measurements.
The step response is obtained from F(s) by the inverse Laplace transform,
using the theory of residues:

    g(t) = L^-1 { F(s)/s } = (1/2πj) ∮ A_C [(s + s_z)/(s(s + s_p))] e^(st) ds
         = A_C Σ res [ (s + s_z)/(s(s + s_p)) e^(st) ]

We have two residues. One is owed to the unit step operator, 1/s:

    res_1 = A_C lim(s→0) s · (s + s_z)/[s(s + s_p)] e^(st) = A_C s_z/s_p

Inserting s_z = 1/(R_1 C_1) and s_p = 1/[A_R R_1 (C_1 + C_2)]:

    res_1 = A_C · A_R R_1 (C_1 + C_2)/(R_1 C_1) = A_C A_R (C_1 + C_2)/C_1
          = A_C A_R / A_C = A_R = R_2/(R_1 + R_2)                          (5.2.33)
As expected, the residue for zero frequency (DC) is set by the resistance ratio.
The other residue is due to the system pole, s_p:

    res_2 = A_C lim(s→-s_p) (s + s_p) · (s + s_z)/[s(s + s_p)] e^(st)
          = A_C [(-s_p + s_z)/(-s_p)] e^(-s_p t)
          = A_C (1 - s_z/s_p) e^(-s_p t) = (A_C - A_R) e^(-s_p t)          (5.2.34)

The result is a time decaying exponential, with the time constant set by the
system pole, s_p, and the amplitude set by the difference between the capacitive and
resistive divider attenuations.
The step response is the sum of both residues:

    g(t) = Σ res = A_R + (A_C - A_R) e^(-s_p t)
         = A_R + (A_C - A_R) e^(-t/[A_R R_1 (C_1 + C_2)])                  (5.2.35)
Written with the circuit components, this is:

    g(t) = R_2/(R_1 + R_2)
         + [C_1/(C_1 + C_2) - R_2/(R_1 + R_2)] · e^(-t (R_1 + R_2)/[R_1 R_2 (C_1 + C_2)])   (5.2.36)

with the final (DC) value:

    g(∞) = R_2/(R_1 + R_2)                                                 (5.2.37)

The system's time constant, as we have already seen in Eq. 5.2.24, is:

    τ_a = 1/s_p = A_R R_1 (C_1 + C_2)                                      (5.2.38)

    τ_a = R_1 R_2 (C_1 + C_2)/(R_1 + R_2)                                  (5.2.39)
We have plotted the step response in Fig. 5.2.4. The plots are made for the
matched and the two unmatched cases, in order to show the influence of trimming the
attenuator by C_2 (±10 pF), as in the frequency domain plots.
[Figure omitted: step responses (0 to 0.12) for a) C_2 = 80 pF, b) C_2 = 90 pF,
c) C_2 = 100 pF, t = 0-80 μs, with R_1 = 900 kΩ, C_1 = 10 pF, R_2 = 100 kΩ;
the positions of s_z and s_p are indicated.]
Fig. 5.2.4: The attenuator's step response for the correctly compensated case, along with
the under- and over-compensated cases (C_2 is trimmed by ±10 pF). Note that by changing
C_2 the system pole also changes, but the system zero remains the same.
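The closed-form step response (Eqs. 5.2.35-5.2.38) is easy to evaluate; the Python sketch below reproduces the three cases of Fig. 5.2.4:

```python
import math

R1, R2, C1 = 900e3, 100e3, 10e-12

def step(t, C2):
    # Eq. 5.2.35 with the substitutions of Eqs. 5.2.19, 5.2.23, 5.2.38
    AR = R2 / (R1 + R2)                        # final (DC) value
    AC = C1 / (C1 + C2)                        # initial (capacitive) value
    tau = R1 * R2 * (C1 + C2) / (R1 + R2)      # system time constant
    return AR + (AC - AR) * math.exp(-t / tau)

for C2 in (80e-12, 90e-12, 100e-12):
    print(step(0.0, C2), step(1e-3, C2))       # initial step and settled value
```

The matched case (C_2 = 90 pF) starts and stays at exactly 0.1; the 80 pF case starts high (1/9) and decays to 0.1, the 100 pF case starts low (1/11) and rises, just as the figure shows.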
Now we are going to analyze the influence of a non-zero source impedance
on the transfer function. Since we can re-use some of the previous results, we shall not
need to recalculate everything.
The capacitive divider presents a relatively high output capacitance to the
following amplifier, and the amplifier input capacitance appears in parallel with C_2,
changing the division slightly, but that is compensated by trimming.
However, the attenuator input capacitance C_a (Eq. 5.2.10) is smaller than C_1
(for an attenuation of A = 10, the input capacitance is C_a = (9/10) C_1). The actual
values of C_1 and C_2 are dictated mainly by the need to provide a standard value for
various probes (themselves compensated attenuators, too). Historically, values
between 10 and 20 pF have been used for C_a. Although small, this load is still
significant if the signal source internal impedance is considered.
High frequency signal sources are designed to have a standardized impedance
of R_g = 50 Ω (75 Ω for video systems). The cable connecting any two instruments
must then have a characteristic impedance of Z_0 = 50 Ω, and it must always be
terminated at its end by an equal impedance in order to prevent signal reflections. As
shown in Fig. 5.2.5a and 5.2.5b, the internal source resistance R_g and the termination
resistance R_t form a 2:1 attenuator (neglecting the 1 MΩ of the 10:1 attenuator):

    v_o/v_g = R_t/(R_g + R_t) = 50 Ω/(50 Ω + 50 Ω) = 1/2                   (5.2.40)

Therefore the effective signal source impedance seen by the attenuator is:

    R_ge = R_g R_t/(R_g + R_t) = 2500/100 = 25 Ω                           (5.2.41)
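The numbers quoted in Fig. 5.2.5 follow directly from these two relations; a quick Python check (9 pF attenuator input capacitance, as in the figure):

```python
import math

Rg = Rt = 50.0                       # generator and termination resistances, ohms
Rge = Rg * Rt / (Rg + Rt)            # Eq. 5.2.41: effective source impedance
Ca = 9e-12                           # attenuator input capacitance, 9 pF

f_h = 1 / (2 * math.pi * Rge * Ca)   # single-pole HF cutoff seen by the source
print(Rge, f_h / 1e6)                # -> 25 ohm, about 707 MHz
```

This confirms the 707 MHz cutoff cited in the caption of Fig. 5.2.5b.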
[Figure omitted: a) source R_g = 50 Ω with termination R_t = 50 Ω driving the 9R/R and
9C/C attenuator; b) equivalent circuit with R_ge = 25 Ω; c) compensation with
R_c = R_ge/9 = 2.78 Ω.]
Fig. 5.2.5: a) When working with 50 Ω impedance, the terminating resistance must match
the generator internal resistance, forming a 2:1 attenuator with an effective output
impedance of 25 Ω; b) with a 9 pF attenuator input capacitance, a HF cut off at 707 MHz
results; c) the cut off for the 10:1 attenuator can be compensated by a 25/9 Ω resistor
between the lower end of the attenuator and ground.
Given the attenuator transfer function for a zero signal source impedance (Eq. 5.2.25),
the transfer function for the impedance R_ge = 25 Ω is:

    F_1(s) = (1/2) · F_0(s) · [1/(R_ge C_a)] / [s + 1/(R_ge C_a)]          (5.2.42)
[Figure omitted: switched attenuator — direct path (15 pF input), 10:1 section
(900 kΩ/10 pF over 100 kΩ/90 pF), 100:1 section (990 kΩ/10 pF over 10 kΩ/990 pF),
2-8 pF trimmers, buffer input C_o = 1 pF.]
Fig. 5.2.6: The direct and the two attenuation paths are switched at both input and output,
in order to reduce the input capacitance. For low cross-talk, the input and output of each
unused section should be grounded (not shown here). The variable capacitors in parallel
with the two attenuation sections are adjusted so that the input capacitance is equal for all
settings. Of course, other values are possible, e.g., 1:1, 20:1, 400:1 (as in the Tek 7A11),
with the highest attenuation achieved by cascading two 20:1 sections. The advantage is that
the parasitic serial inductance of the largest capacitance in the highest attenuation section is
avoided; a disadvantage is that it is very difficult to trim correctly.
[Figure omitted: two-section switched attenuator (500 kΩ/20 pF with a 15-25 pF trimmer;
950 kΩ/10 pF over 50 kΩ/180-200 pF with a 2-8 pF trimmer), 25 Ω effective source,
1.3 Ω ground return compensation resistor, buffer input C_o = 1 pF.]
Fig. 5.2.7: The attenuator with no direct path, in which the 25 Ω effective source impedance
compensation can be used for both settings. A low ground return path impedance is necessary.
On the negative side, by using the 2:1 and 20:1 attenuation, the amplifier must
provide another factor of two in gain, making the system optimization more difficult.
Fortunately, for modern amplifiers driving an A/D converter, the gain requirement is
low, since the converter requires only a volt or two for a full range display; in contrast,
a conventional scope CRT requires tens of volts on the vertical deflecting plates.
Thus a factor of at least 10 in gain reduction (and a similar bandwidth increase!) works
in favor of modern circuits.
Whilst the gain requirements are relaxed, modern sensitive circuits require a
higher attenuation to cover the desired signal range. But obtaining a 200:1 attenuation
can be difficult, because of capacitive feed-through: even 0.1 pF from the input to
the buffer output, together with a non-zero output impedance, can be enough to spoil
the response. If we can tolerate a feed-through error of one least significant bit of an
8 bit analog to digital converter, the 200:1 attenuator would need an effective isolation
of 20·log10(200 · 2^8) ≈ 94 dB, which is sometimes hard to achieve even at audio
frequencies, let alone at GHz. A cascade of two sections could be the solution.
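The isolation figure quoted above is a one-line calculation:

```python
import math

attenuation = 200        # 200:1 attenuator
adc_bits = 8
# feed-through must stay below 1 LSB of the ADC at full scale,
# i.e. below 1/(200 * 2^8) of the input signal:
isolation_db = 20 * math.log10(attenuation * 2 ** adc_bits)
print(round(isolation_db, 1))   # about 94 dB
```

The same expression scales directly to higher resolution converters: each extra ADC bit adds about 6 dB to the required isolation.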
5.2.2 Attenuator Inductance Loops
A designer's life would be easy with only resistances and capacitances to deal
with. But every circuit also has inductance, whether we intentionally put it in or
desperately try to avoid it. As we have learned in Part 2, in wideband amplifiers,
instead of trying to avoid the unavoidable, we rather try to put the inductance to use by
means of fine tuning and adequate damping.
In Fig. 5.2.8 we have indicated the two inductances associated with the
attenuator circuit. Because of the high voltages involved, the attenuator circuit can not
use arbitrarily small components, packed arbitrarily close together. As a consequence,
the circuit will have loop dimensions which can not be neglected and, since the
inductance value is proportional to the loop area, the inductance values can be
relatively large (for wideband amplifiers).
As with stray capacitance, the value of stray inductance can not be readily
predicted, at least not to the precision required. Each component in Fig. 5.2.8 will
have its own stray inductances, one associated with the internal component structure
and the other associated with the component leads, the soldering pads, and the PCB
traces. These add to the loop inductance.
[Figure omitted: the attenuator with loop inductances L_1 (input loop, with the two 50 Ω
resistances and the C_1 R_1 / C_2 R_2 divider) and L_2 (output loop, with R_3, R_d, the
C_p R_p protection, and the buffer input C_o ≈ 2 pF), plus their mutual inductance L_M.]
Fig. 5.2.8: Inductances owed to circuit loops can be modeled as inductors in series with the
signal path. Note that in addition to the two self inductances there is also a mutual inductance
between the two. The actual values depend on the loops' size, which in turn depends on the
size of the components and the circuit's layout. Smaller loops have less inductance. Mutual
inductance can be reduced by shielding, although this can increase the stray capacitances.

The inductance of a loop is the ratio of the magnetic flux encircled by the loop to the
current which generates it:

    L = Φ/I = B S/I = μ H S/I = μ_0 μ_r H S/I                              (5.2.43)
The current I and the magnetic field strength H are proportional: H = I/(2r)
for a single loop, where r is the loop radius. In a linear non-magnetic environment
(with the relative permeability μ_r = 1), I and B are also proportional, because
B = μH. Furthermore, μ_0 is the free space magnetic permeability, also known as the
induction constant, the value of which has been set by the SI definition of the
Ampere: μ_0 = 4π·10^-7 [Vs A^-1 m^-1] (i.e., a field of 1 A/m produces, in vacuum,
a flux density of 4π·10^-7 Vs/m²). Because for a circular loop S = πr², our loop
inductance equation can be reduced to:

    L = μ_0 S/(2r) = μ_0 π r²/(2r) = μ_0 π r/2  ∝  r                       (5.2.44)
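This simplified single-loop estimate (Eq. 5.2.44, which ignores conductor thickness and the logarithmic correction of more exact formulas) is easy to evaluate:

```python
import math

MU0 = 4 * math.pi * 1e-7      # free-space permeability, Vs/(Am)

def loop_inductance(r_m):
    # Eq. 5.2.44: single circular loop of radius r (in metres),
    # L = mu0 * pi * r / 2 -- a rough, thickness-free estimate
    return MU0 * math.pi * r_m / 2

# a 1 cm radius loop (illustrative): roughly 20 nH
print(loop_inductance(0.01) * 1e9)
```

Even this crude figure shows why centimetre-sized attenuator loops matter in a circuit where a few tens of nH already shift the pole positions.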
to L_1 in the same ratio as the attenuation ratio. Precision in this respect is difficult,
but not impossible to achieve. Our inductance expression, Eq. 5.2.44, does not show it,
but inductance is also inversely proportional to the trace width. Powerful finite element
numerical simulation routines will be required for the job.
However, the same trick can not be used for L_2 (there is no attenuation in this
loop!). Fortunately, as will become clear from the analysis below, the input inductance
L_1 is more critical than L_2, since the latter is loaded by a much smaller capacitance
(C_o) and can be suitably damped by a larger resistance (R_d, which is already in the
circuit because it is required for the FET gate protection).
constants, V" G" V# G# , matched also to the other attenuator paths, so that the
variable capacitor in parallel is not needed. Also, we shall replace the two 50 H
resistors with a single 25 H one, representing the effective signal source resistance Vs
in series with the input, with @i @g #. The loop inductances are represented by
discrete components, P" and P# in the forward signal paths, as drawn in Fig. 5.2.8.
In the second loop the first thing to note is that G# is many times larger than
Go (10500 , depending on the attenuation setting) and the same is true for Gp , which
means that their reactance will be comparably low and can thus be neglected.
Likewise, the resistances V# and Vp in parallel with these capacitances are large in
comparison with their reactances. What remains is the loop inductance P# in series
with Vd V$ , driving the amplifier input capacitance Go . If the attenuated input
voltage is @i E, the output voltage will be:
@o
@i
"
E =Go
"
=P# Vd V$
"
=Go
(5.2.45)
@i
E
"
P# Go
Vd V$
"
=# =
P#
P# G o
(5.2.46)
Since V$ is fixed and of quite low value, Vd is used to provide the desired damping.
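For the 2nd-order approximation of Eq. 5.2.46 alone, the R_d needed for a Bessel-type damping can be computed directly. This sketch assumes the 2nd-order Bessel alignment (normalized polynomial s² + 3s + 3, i.e. damping ζ = √3/2); the component values are illustrative, and the full circuit is 3rd order once the follower pole is included:

```python
import math

def bessel_rd(L2, Co, R3):
    # Eq. 5.2.46 denominator: s^2 + s*(Rd + R3)/L2 + 1/(L2*Co).
    # Matching a 2nd-order Bessel pattern (zeta = sqrt(3)/2) requires
    # (Rd + R3)/L2 = 2*zeta*w0 = sqrt(3)/sqrt(L2*Co), hence:
    return math.sqrt(3.0) * math.sqrt(L2 / Co) - R3

# illustrative values near those in the text: L2 = 30 nH, Co = 2 pF, R3 = 2.74 ohm
print(bessel_rd(30e-9, 2e-12, 2.74))
```

The result (around 210 Ω for these assumed values) is in the same range as the 180 Ω R_d of Fig. 5.2.9, which was chosen for the actual 3rd-order circuit.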
The input loop analysis is similar. Here we have the equivalent source
resistance R_s + R_3 in series with L_1, driving the equivalent input attenuator
capacitance C_a (Eq. 5.2.9; the attenuator resistance R_1 + R_2 can be neglected at
high frequencies). At the top of the attenuator we have:

    v_i = (v_g/2) · [R_3 + 1/(sC_a)] / [sL_1 + R_s + R_3 + 1/(sC_a)]       (5.2.47)

    v_i = (v_g/2) · [1/(L_1 C_a) + sR_3/L_1]
          / [s² + s(R_s + R_3)/L_1 + 1/(L_1 C_a)]                          (5.2.48)

    v_i = (v_g/2) · [(1 + sC_a R_3)/(L_1 C_a)]
          / [s² + s(R_s + R_3)/L_1 + 1/(L_1 C_a)]                          (5.2.49)
It is clear that the frequency of the zero, 1/(C_a R_3), is much higher than the
frequency of the pole pair, ~1/sqrt(L_1 C_a). Also, if L_1 ≈ L_2 and C_a is at least 5 to
10 times larger than C_o, then C_a will dominate the response. Fortunately, as discussed
above, with a clever layout of components and a suitable ground plane, L_1 can be
broken into L_1a and L_1b, so that L_1b is in the ground return path. If we can make
L_1a = 9 L_1b, we would achieve an effective inductance compensation in this loop.
We are thus left with the L_2 loop and its transfer function, Eq. 5.2.46.
However, this 2nd-order function will be transformed by the pole of the JFET source
follower into a 3rd-order function, owing to its capacitive loading.
trace width. In [Ref. 5.16] a good empirical approximation is offered:
P !# 6 ln
#6
A2
!##$&
!&
A2
6
(5.2.50)
where the trace length 6, width A and thickness 2 are all in mm, resulting in the
inductance in nH (no ground plane in this case!). With surface mounted components,
choosing capacitors with low serial inductance, and using miniature relay switches in
the attenuator, the inductance L_2 can be reduced to less than 10 nH, making the pole
(pair) at ~1/sqrt(L_2 C_o) high compared to the source follower real pole (set by the
damping resistance R_d and the source follower loading capacitance C_L; see the JFET
source follower discussion in Part 3, Sec. 3.9). However, by making L_2 somewhat
larger, say, 30-50 nH, we can achieve a 3rd-order Bessel pole pattern, improving the
bandwidth and reducing the rise time. In Fig. 5.2.9 we see the attenuator circuit of the
A = 10 section, followed by a JFET source follower.
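The empirical trace-inductance formula of Eq. 5.2.50 translates directly into code:

```python
import math

def trace_inductance_nH(l_mm, w_mm, h_mm):
    # Eq. 5.2.50 (empirical, [Ref. 5.16], no ground plane);
    # trace length l, width w, thickness h in mm, result in nH
    return 0.2 * l_mm * (math.log(2 * l_mm / (w_mm + h_mm))
                         + 0.2235 * (w_mm + h_mm) / l_mm + 0.5)

# a 10 mm long, 1 mm wide, 35 um thick trace: roughly 7 nH,
# i.e. about 7 nH/cm, consistent with the range quoted above
print(trace_inductance_nH(10.0, 1.0, 0.035))
```

Note the weak (logarithmic) dependence on trace width: doubling the width only shaves off a fraction of a nH per centimetre, which is why layout loop area, not trace width, dominates.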
[Figure omitted: R_s = 25 Ω source, L_1a = 3.3 nH, the attenuator (R_1 = 900 kΩ with
C_1 = 10 pF, R_2 = 100 kΩ with C_2 = 90 pF), L_1b and R_3 = 2.74 Ω in the ground
return, L_2 = 0-50 nH, R_d = 180 Ω, protection diodes D_1 and D_2, C_o with a 0.37 nH
lead inductance, source follower JFET_1 (2N5911) biased by the current source JFET_2
(2N5911, R_ss = 50 Ω, C_ss = 800 pF), load C_L = 3.3 pF, R_L = 12 kΩ, supplies V_cc,
V_ee.]
Fig. 5.2.9: The attenuator and the source follower JFET_1 (JFET_2 acts as a constant current
source bias for JFET_1). The input loop inductance L_1 should be low, but the attenuation
can be compensated (by L_1b). The inductance L_2 of the second loop can be tuned and
damped by an appropriate value of R_d to provide a Bessel step response, as seen in
Fig. 5.2.10.
Note that here we have not drawn the protection components C_p and R_p, but
since 325 V at the input (the peak value of the 230 V AC mains) results in 32.5 V at
the attenuator output, these components are absolutely necessary. Also, C_p should be
a high voltage type (500 V), in order to survive the 325 V in the direct path (and still
163 V for a 2:1 attenuator); therefore it will be of larger dimensions, so its internal
serial inductance will have to be taken into account.
Note also that for high bandwidth a low value of C_o must be ensured. Since
the negative input impedance compensation network (as in Part 3, Sec. 3.9), as well as
R_d, D_1, D_2, C_GD, and C_L, are present at the v_o node, C_o will tend to be high.
We have analyzed the step response in Fig. 5.2.10 for two values of L_2 (10 and
50 nH; R_d has been chosen for a correct Bessel damping).
[Figure omitted: input step v_i and the responses v_o1, v_L1 (L_2 = 10 nH) and v_o2,
v_L2 (L_2 = 50 nH), t in ns.]
Fig. 5.2.10: Step response of the circuit in Fig. 5.2.9. With a low L_1, a correctly damped
L_2, and a good JFET, a 350 MHz bandwidth (about 1 ns rise time at v_L) can be easily
achieved. The source follower gain is a little less than one. v_o and v_L are drawn for the
two L_2 cases.
The capacitance between two parallel plates is:

    C = ε_0 ε_r S/d     [F] = [As/V]                                       (5.2.51)

where S is the plate area [m²], d is their distance [m], ε_0 = 8.85·10^-12 [As/Vm] is
the permittivity of free space (vacuum) and ε_r is the relative permittivity of the
dielectric between the plates. A pad on the PCB thus has some small stray capacitance
towards ground (large if a ground plane is used). This capacitance changes with
frequency in proportion to ε_r. Also, the material is porous and the fibres are long,
extending to the edge of the board, allowing moisture in (water: ε_r ≈ 80), which
causes long term changes. The problem is not specific to this material only; it is
encountered with all traditional PCB materials (as well as many other insulators).
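As a sense of scale for such stray pad capacitances, Eq. 5.2.51 gives (the pad and board dimensions below are illustrative, not from the text):

```python
EPS0 = 8.85e-12   # permittivity of free space, As/(Vm)

def plate_capacitance(S_m2, d_m, eps_r):
    # Eq. 5.2.51: parallel-plate capacitance, fringing fields neglected
    return EPS0 * eps_r * S_m2 / d_m

# a 10 mm x 10 mm pad over a ground plane on a 1.5 mm board,
# assuming eps_r ~ 4.5 for a common glass-epoxy laminate
print(plate_capacitance(1e-4, 1.5e-3, 4.5) * 1e12)   # capacitance in pF
```

A couple of pF from a single pad is comparable to C_1 of the attenuator itself, which is exactly why the frequency dependence of ε_r shows up in the step response.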
[Figure omitted: step responses (0 to 0.12, t = 0-180 μs) of the 10:1 attenuator
(R_1 = 900 kΩ with C_1 = 10 pF and C_PCB1 = 1-3 pF; R_2 = 100 kΩ with C_2 = 90 pF
and C_PCB2 = 1-3 pF), for C_2 min and C_2 max.]
Fig. 5.2.11: The hook effect is most noticeable in the frequency range 10-300 kHz. Because
the relative permittivity ε_r of a common PCB material is not exactly constant with
frequency, the high impedance attenuator will exhibit a 'hook' in its step response, which can
not be trimmed out by the usual adjustment of C_2. The C_PCB stray capacitance can vary by
some 10-30 %, depending on the actual topology involved. Since C_1 is small, it is affected
by a few percent. The lower attenuator leg is less affected, owing to the larger value of C_2.
To solve this problem, a special Teflon based material is used for the instrument
front end, but it is expensive and not readily available. If it can not be obtained, one
possible solution could be to implement two large pads on a two sided PCB, in
parallel with C_1 and C_2, with their areas in the same ratio as the attenuation factor
requires [Ref. 5.67]. Then the effect would be equally present in both legs, canceling
out the hook, Fig. 5.2.12. Even some trimming can be done by drilling small holes in
the larger pad pair (in contrast to cutting a pad corner, drilling removes the dielectric,
thus lowering both the capacitance and ε_r).
Fig. 5.2.12: Canceling the hook effect in the common PCB material is achieved by
intentionally adding two capacitances in the form of large PCB pads, with areas in the same ratio
as required by the attenuation (since the area is proportional to the square of the linear
dimensions, for a 9:1 capacitance ratio, a 3:1 dimension ratio is needed). Trimming is possible
by drilling small holes in the larger pad.
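The area-to-dimension argument in the caption can be sketched numerically. This is a minimal parallel-plate estimate; the pad side lengths, εr and board thickness are illustrative assumptions, not values from the text.

```python
# Sketch: sizing the compensation pads of Fig. 5.2.12.
# Assumed values (eps_r, thickness, pad sides) are illustrative only.
EPS0 = 8.854e-12  # permittivity of free space, F/m

def pad_capacitance(side_mm, eps_r=4.7, thickness_mm=1.5):
    """Parallel-plate estimate C = eps0*eps_r*A/d for a square pad."""
    area = (side_mm * 1e-3) ** 2
    return EPS0 * eps_r * area / (thickness_mm * 1e-3)

# A 9:1 capacitance ratio needs a 3:1 linear dimension ratio,
# since the pad area grows with the square of the side length.
c_small = pad_capacitance(5.0)    # 5 mm pad
c_large = pad_capacitance(15.0)   # 15 mm pad, 3x the side
ratio = c_large / c_small
print(f"C_large/C_small = {ratio:.1f}")  # -> 9.0
```

Because both pads share the same dielectric, any drift of εr scales both capacitances equally, which is exactly why the hook cancels.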
The main problem with this solution is that the use of external probes will
expose the hook again, although to a lesser extent (owing to the large capacitance of
the probe compensation).
5.2.4 Improving the JFET Source Follower DC Stability
The DC performance of a JFET source follower is far from perfect. Even if we
use a dual JFET in the same case and on the same substrate, e.g., the Siliconix 2N5911
as in Fig. 5.2.9, the characteristics of the two devices will not match perfectly. The
2N5911 data sheet states a VGS mismatch of 10 mV maximum and a temperature drift
of 20 µV/K. The circuit in Fig. 5.2.13 offers moderate DC stability; the resistor RT is
trimmed for a zero output to input DC offset.
[Fig. 5.2.13 schematic: JFET source follower fed from the attenuator; Rss = 50, Css = 800 pF, CL = 3.3 pF, RL = 12 k; the offset network (RT = 100, with 510 and 62 Ω resistors, a 10 nF capacitor and the Vofs supply) trims the source bias between Vcc and Vee.]
Fig. 5.2.13: Simple offset trimming of a JFET source follower.
Basically, there are three ways of achieving a low DC error, each having its
own advantages and drawbacks. While DC performance is not of primary interest in
this book, it should be implemented so that high frequency performance is preserved.
The first technique is suitable for microprocessor controlled equipment, where
the input can be temporarily switched to ground, the offset measured, and the error
either adjusted by a digital to analog converter or subtracted from the sampled signal
in memory. But this operation should not be repeated too often, nor should it take a
considerable amount of time; otherwise the equipment would miss valid trigger events
or, worse still, introduce errors by loading and unloading the signal source with the
instrument's input impedance. This is a rather inelegant solution and it should be
taken as a last resort only.
A better way, shown in Fig. 5.2.14, is to use a good differential amplifier to
monitor the difference between the Q1 gate and the output, integrate it and modify the
Q2 current to minimize the offset. Note that this technique works well only while the
input is within the linear range of the JFET; when in the non-linear range or when
overdriven, the integrator will develop a high error voltage, which will be seen as a
long tail after the signal returns within the linear range. Also, owing to the presence
of R1 and R2, the attenuator lower arm resistors will need to be readjusted.
[Fig. 5.2.14 schematic: R1 = R3 = 10 M sense the Q1 gate and the output, R2 = R4 = 1 M return the A1 inputs to ground, C1 = C2 = 10 nF; A1 drives the Q2 source through R5 = 10 k; follower: Q1, Rss = 50, Css = 800 pF, CL = 3.3 pF, RL = 12 k, between Vcc and Vee.]
Fig. 5.2.14: Active DC correction loop. The amplifier A1 amplifies and integrates the
difference between the Q1 gate and the output, driving through R5 the source of Q2 and
modifying its current to minimize the offset. The resulting offset is equal to the offset of A1,
multiplied by the loop gain (1 + R3/R4). A differential amplifier with a very low offset will
usually have an input bias current much larger than the JFET input current, therefore resistors
R2 and R4 provide a lower impedance to ground. C2 is the integration capacitor, whilst C1
provides an equal time constant to the non-inverting input. The feedback divider, R3 and R4,
should be altered to compensate for the system gain being slightly lower than one (this is
achieved by adding a suitably low value resistor in series with R4). For a low error the amplifier
A1 must have a high common mode rejection up to the frequency set by C2 and R3||R4.
But the most serious problem is owed to the amplifier A1: in order to
minimize the system offset it should itself have both a low offset voltage and a low
input bias current. Although A1 can be a low bandwidth device, the low input error
requirements can easily put us back to where we started from.
The example in Fig. 5.2.14 is relatively simple to implement. However, for a
low error we must keep an eye on several key parameters. Ideally we would like to get
rid of the resistor R2 (and R4) to avoid the DC path to ground, because it alters the
attenuator balance.
Unfortunately, the input common mode range of the error amplifier is limited
and, more importantly, amplifiers with a low DC offset are usually made with bipolar
transistors at the input, so their input bias current can be in the nA range, much higher
than the JFET gate's leakage (< 20 pA). The bias current would then introduce a high
DC offset across R1 (and R3). Here R2 and R4 come to the rescue, by conducting the
larger part of the bias current to ground through their lower resistance. On the other hand,
the amplifier input offset voltage is then effectively amplified by the DC loop gain,
1 + R3/R4. The amplifier is selected so that the total offset error is minimized:
Vofs ≈ (1 + R3/R4)·(1 + 4·ΔR/R)·VA1ofs + IA1ofs·(R1 R2)/(R1 + R2)      (5.2.52)
where ΔR/R is the nominal resistor tolerance, and VA1ofs and IA1ofs are the amplifier's
voltage and current input offset, respectively.
An industry standard amplifier, the OP-07, has Vofs ≈ 30 µV and Iofs ≈ 0.4 nA
typical, so by taking the resistor values as in Fig. 5.2.14 (with a 1 % tolerance), we can
estimate the typical total system offset to be within 728 µV, which is slightly larger
than we would like. The offset can be reduced using a chopper stabilized amplifier,
such as Intersil's ICL-7650 or the LTC-1052 from Linear Technology, which have
a very low voltage offset (< 5 µV) and low current offset (< 50 pA), but their switching
noise must be filtered at the output; also, their input switches are very delicate and
must be well protected from over-voltage. Therefore we can not do without R2 and
R4, and consequently the attenuator must be corrected by increasing the lower arm
resistance appropriately. See [Ref. 5.2] for more examples of such solutions.
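The offset estimate can be checked numerically. The grouping of the terms below is our reading of Eq. 5.2.52 and should be treated as an assumption; the component and OP-07 values are the ones used in the text.

```python
# Rough numeric check of the DC offset estimate for Fig. 5.2.14.
# The term grouping is an assumed reading of Eq. 5.2.52.
def total_offset(v_ofs, i_ofs, r1, r2, r3, r4, tol):
    loop_gain = 1.0 + r3 / r4          # DC loop gain, here 11
    mismatch = 1.0 + 4.0 * tol         # worst-case divider tolerance term
    r_eq = r1 * r2 / (r1 + r2)         # impedance seen by the offset current
    return loop_gain * mismatch * v_ofs + i_ofs * r_eq

v = total_offset(v_ofs=30e-6, i_ofs=0.4e-9,
                 r1=10e6, r2=1e6, r3=10e6, r4=1e6, tol=0.01)
print(f"estimated system offset = {v*1e6:.0f} uV")  # a few hundred uV
```

With these numbers the estimate lands around 0.7 mV, i.e. the same order as the figure quoted in the text.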
The third technique involves separate low pass and high pass amplifier paths
and summing their outputs.
The example in Fig. 5.2.15 is made on the assumption that the sum of the two
outputs restores the original signal in both phase and amplitude. As the readers who
have tried to build loudspeaker crossover networks will know from experience, this
can be done correctly only for simple, first-order RC filters (with just two paths; for
higher order filters a third, band pass path is necessary).
[Block diagram: the input drives a low pass path A1 and a high pass path A2, whose outputs i1 and i2 are summed in A3.]
Fig. 5.2.15: The principle of separate low pass and high pass amplifiers.
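The claim that only first-order paths sum back to the original signal can be verified directly: with a common corner frequency, LP(s) + HP(s) = 1 identically. A minimal sketch (corner frequency is an arbitrary assumption):

```python
# For first-order RC paths with the same corner frequency,
# LP(s) + HP(s) = 1, so the sum restores the signal exactly in
# amplitude and phase. This fails for higher-order filters.
def lp(s, w0): return w0 / (s + w0)
def hp(s, w0): return s / (s + w0)

PI = 3.141592653589793
w0 = 2 * PI * 1e3                  # 1 kHz corner, arbitrary choice
for f in (10.0, 1e3, 1e5):         # well below, at, and above the corner
    s = 1j * 2 * PI * f
    total = lp(s, w0) + hp(s, w0)
    print(f, abs(total))           # magnitude stays exactly 1
```

A second-order pair (e.g. two cascaded RC sections per path) would show a dip and phase error around the crossover, which is why a third, band pass path becomes necessary.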
Here the main problem is with the input of the low pass amplifier A1, which
must have an equally low input bias current as the high pass A2, but should also have
a very low voltage offset. Although in A1 we don't need to worry about the high
frequency response, we are essentially back at the start, since JFETs and MOSFETs,
which have a low input current, have a high offset voltage, and vice versa for BJTs.
But we can combine Fig. 5.2.14 and 5.2.15 and, by putting the RC network in
front of the source follower, we can eliminate the amplifier A2. Fig. 5.2.16 shows a
possible implementation.
[Fig. 5.2.16 schematic: C1 = 330 pF with R1 = 900 k and R2 = 100 k at the Q1 gate; R3 = 900 k and R4 = 100 k feed A1 (C2 = 10 nF, R5 = 2.7 k, R6 = 300), whose output returns to the gate through R7 = 10 M; follower: Rss = 50, Css = 800 pF, CL = 3.3 pF, RL = 12 k, between Vcc and Vee.]
Fig. 5.2.16: With this configuration we can eliminate the need for a separate high pass
amplifier. The DC correction is now applied to the Q1 gate through R7. The error integrating
amplifier A1 must have a gain of 10 in order to compensate for the (1 + R1/R2) and (1 + R3/R4)
attenuation. Resistors R1 and R2 now provide the 1 MΩ input impedance for all attenuation
settings, and this requires the compensated attenuators in front to be corrected accordingly.
[Fig. 5.2.17 schematic: input through R1 = 1 M (with 20 pF) and R2 = 1 M; the error amplifier chain A1/A2 (R3 = R4 = 100 k, C2 = 10 nF, R5, 10 k) returns the correction to the Q2 current source, with R7 = 10 M bootstrapped; follower: Rss = 50, Css = 800 pF, CL = 3.3 pF, RL = 12 k.]
Fig. 5.2.17: By inverting the output, the error amplifier becomes an inverting
integrator and the offset correction is independent of the attenuator settings.
The bootstrapping of R7 produces an effective input resistance of about 2.4 GΩ.
Of course, now the DC error correction path must be returned to the current
source Q2. The input resistor R1 must be increased to 1 MΩ, since now the input of
A1 is at the virtual ground; likewise R2 must be equal to R1. Note that both A1 and
A2 offsets add to the final DC error.
Further evolution of this circuit is possible by combining DC gain switching
(R2 or R4 adjusting) with the input attenuation. A very interesting result has been
described in [Ref. 5.2], where all input relays have also been eliminated (using 3
source followers with the switching at their supply voltages by PIN diodes).
5.2.5 Overdrive Recovery
The integration loop will reduce the DC error only if the output follows the
input. However, under a hard overdrive the JFET will saturate and the integrator will
build up a charge proportional to the input overdrive amplitude and duration. When
the overdrive is removed, the loop will re-establish the original DC conditions, but
with the integration time constant, so the follower will exhibit a very long tail.
This is one of the most annoying properties of modern instrumentation,
because we often want to measure the settling time of an amplifier and a convenient
specification is the time from start of the transient to within 0.1 % of the final value.
With a good old analog scope we would simply increase the vertical sensitivity and
adjust the vertical position so that the final signal level is within the screen range. But
with modern DC compensated circuits this is not possible, and in order to avoid the
post-overdrive tail we must use a specially built external limiter, [Ref. 5.6], to keep
the input signal within the linear range of the scope. The quality and speed of this
limiter will also influence the measurement.
Note that simple follower circuits, like the one in Fig. 5.2.13, will also
exhibit a small but noticeable post-overdrive tail, mainly owed to thermal effects.
Also, the high amplitude step response can be nonlinear, as shown in Fig. 5.2.18, owing
to the variation of the JFET gate to channel capacitance with voltage (Eq. 5.2.18–19),
but the time constant involved here is relatively small.
Fig. 5.2.18: Large signal step response (but still well below overdrive) is nevertheless
nonlinear, caused by the variation of the JFET gate–drain capacitance with voltage.
[Fig. 5.2.19 artwork: a) JFET cross-section (p-type gate and substrate, n-type channel); b) the Id vs. Vgs characteristic, with Idss and the pinch-off voltage Vp marked; c) the circuit symbol; d) the equivalent circuit with Cgd, Cgs, the controlled source gm·Vgs and ro.]
Fig. 5.2.19: a) A typical n-channel JFET structure cross-section under the bias condition.
The p-type substrate is in contact with the p-type gate. The n-type channel is formed
between the source and the drain. The bias voltage depletes the channel. b) The Vgs–Id
characteristic. c) The circuit symbol. d) The equivalent circuit model.
Cgd = Cgd0 / (1 + Vgd/Vbi)^nJ      (5.2.53)

Cgs = Cgs0 / (1 + Vgs/Vbi)^nJ      (5.2.54)

where:
nJ is the junction grading coefficient (1/2 for abrupt and 1/3 for graded
junctions; most JFETs are built with a graded junction);

Vbi = (kB T / qe) · ln(NA ND / ni²)  is the intrinsic zero bias built in potential;

a = sqrt( (2 εSi Vbi / qe NA) · (1 + NA/ND) )      (5.2.55)

is the zero bias depletion layer width, and:

VGS(off) = (qe ND a² / 2 εSi) − Vbi      (5.2.56)

The transition frequency is set by the transconductance and the total gate capacitance:

fT = gm / (2π CT) ≈ 25×10⁻³ / (6.28 × 40×10⁻¹²) ≈ 100 MHz      (5.2.57)
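The junction capacitance law can be evaluated numerically to see how large the variation over the bias range actually is. A minimal sketch; the values Cgd0 = 4 pF, Vbi = 0.7 V and a graded junction (nJ = 1/3) are illustrative assumptions, not data for a particular device.

```python
# Sketch of the junction capacitance law (Eq. 5.2.53 form):
# C = C0 / (1 + V/Vbi)^nJ for a reverse-biased graded junction.
def junction_c(c0, v_rev, v_bi=0.7, n_j=1.0/3.0):
    """Reverse-biased junction capacitance, v_rev >= 0 (assumed values)."""
    return c0 / (1.0 + v_rev / v_bi) ** n_j

c_at_0 = junction_c(4e-12, 0.0)
c_at_5 = junction_c(4e-12, 5.0)
print(f"Cgd at 0 V : {c_at_0*1e12:.2f} pF")
print(f"Cgd at 5 V : {c_at_5*1e12:.2f} pF")
# roughly a factor of 2 change over 5 V of bias
```

This factor-of-two capacitance swing with signal voltage is the mechanism behind the nonlinear large-signal step response of Fig. 5.2.18.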
For MOSFETs the situation is slightly different. Fig. 5.2.20 shows a typical n-channel MOS transistor cross-section and the equivalent circuit model.
[Fig. 5.2.20 artwork: a) MOSFET cross-section (metal gate over SiO2, n+ source and drain on a p-type substrate, substrate terminal Vsub, bias induced n-type channel); b) the Id vs. Vgs characteristic with Idss and Vgs(off); c) the circuit symbol; d) the equivalent circuit with Cgd, Cgs, Cgb, Csb, Cdb, the sources gm·Vgs and gmb·Vsb, and ro.]
Fig. 5.2.20: a) A typical n-channel MOSFET structure's cross-section under bias condition.
Two heavily doped n+ regions (source and drain) are manufactured on a p-type substrate and a
metal gate covers a thin insulation layer, slightly overlapping the n+ regions. The bias voltage
depletes a thick region in the substrate, within which an n-type channel is induced between the
source and the drain. b) The Vgs–Id characteristic. c) The circuit symbol. d) The equivalent
circuit model has two current sources, one owed to the usual mutual transconductance gm and
the gate–source voltage Vgs; the other is owed to the so called body effect transconductance
gmb and the associated source–body voltage Vsb. The gmb is typically an order of magnitude
lower than gm.
From the MOSFET structure cross-section it can be deduced that Cgb is small,
owing to the relatively large depleted region. Ordinarily its value is about 0.1 pF and it
is relatively constant. Likewise, the depletion region capacitances Csb and Cdb are also
small (they are proportional to the gate and source area), but they are voltage
dependent:

Csb = Csb0 / (1 + Vsb/Vbi)^(1/2)      (5.2.58)

Cdb = Cdb0 / (1 + Vdb/Vbi)^(1/2)      (5.2.59)

The capacitances Cgs and Cgd are owed to the SiO2 insulation layer between
the gate and the channel. If Ag is the gate area and Cx is the unit area capacitance of
the oxide layer under the gate, then the total capacitance is:

Cgs0 + Cgd0 = Ag Cx      (5.2.60)
Most MOSFETs are built with symmetrical geometry, thus the total zero bias
capacitance is simply split in half. But in the saturation region the channel narrows, so
the drain voltage influence is small, resulting in a nearly constant Cgd whose value is
essentially proportional to the small gate–drain overlapping area. Thus typical Cgd
values range between 0.002 and 0.020 pF.
Cgs is larger, typically some 2/3 of the Ag Cx value, or about 1–2 pF.
Although the MOSFET's gm is typically lower than in JFETs, it is the very small
capacitances, in particular Cgd and Cgb, which are responsible for the wider bandwidth
of a MOSFET source follower. Cut off frequencies of many GHz are easily achieved.
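The bandwidth advantage can be made concrete with the fT = gm/(2π C) relation used above. The device values here are rough assumptions for comparison only, not data sheet figures.

```python
# Rough fT comparison suggested by the text: fT = gm / (2*pi*C_total).
# Both parameter sets are illustrative assumptions.
from math import pi

def f_t(gm, c_total):
    return gm / (2 * pi * c_total)

f_jfet = f_t(gm=25e-3, c_total=40e-12)   # JFET: large total gate capacitance
f_mos  = f_t(gm=10e-3, c_total=1.0e-12)  # MOSFET: ~1 pF total, lower gm
print(f"JFET   fT ~ {f_jfet/1e6:.0f} MHz")
print(f"MOSFET fT ~ {f_mos/1e9:.1f} GHz")
```

Even with a gm less than half that of the JFET, the MOSFET's much smaller capacitance puts its cut off frequency more than an order of magnitude higher.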
5.2.7 Input Protection Network
The input protection network is needed for two distinct real life situations. The
first one is the (occasional) electrostatic discharge, the second one is a long term
overdrive.
Imagine a technician sitting on a well insulated chair, wearing woollen or
synthetic clothes and rubber soled shoes, repairing a circuit on his bench. For a while
he rubs his clothes against the chair by reaching for the schematic, the spare parts, some
tools, etc., thus quickly charging himself up to an average 500 V. Suddenly, he needs
to put a 1:1 scope probe somewhere on the rear panel and he stands up, touching the
probe to identify its trace by the characteristic capacitive AC mains pickup. By
standing up, he has increased his average distance from the chair by a large factor, say
30, but the charge on the chair and his clothes remains unchanged. This is equivalent
to charging a parallel plate capacitor and then increasing the plate distance, so that
the capacitance drops inversely with the distance (Eq. 5.2.51). Because V = Q/C, his
effective voltage would increase 30 times, reaching some 15 kV!
The average capacitance of the human body towards the surroundings of an
average room is about 200 pF. So, when our technician touches the probe tip, he will
discharge the 15 kV of his 200 pF right into the input of the poor scope. And such a
barbaric act can be repeated hundreds of times during an average repairing session.
At the instant the probe tip is touched, the effective input voltage falls for the
first 5 ns (the propagation delay of the 1 m long probe cable) to a level set by the
resulting capacitive divider, 1/(1 + Ccable/Cbody); so if Ccable ≈ 100 pF/m,
V ≈ (2/3)·Vbody ≈ 10 kV. Here we assume a signal propagation velocity of 0.2 m/ns
(about 2/3 of the speed of light). Also, note that the probe cable is made as a lossy
transmission line (the inner conductor is made of a thin resistive wire, about 50 Ω/m).
After 5 ns the cable capacitance is fully charged and the signal reaches the
spark gap. The spark gap fires, limiting the input voltage to its own breakdown voltage
(1500–2000 V), providing a low impedance path to ground and discharging
Ccable + Cbody. Some 25 ns later the voltage falls below the spark threshold.
Now the total capacitance Ccable + Cbody + Cin is discharged into the
remaining input resistance. With the attenuator set to the highest sensitivity (1:1), the
input resistance is equal to the 1 MΩ of Rin, in parallel with the series connection of the
damping resistor Rd and one of the protection diodes (depending on the voltage
polarity). The diode must withstand a peak current Idpk = Vspark/Rd; if Rd = 150 Ω,
then Idpk = 2000/150 ≈ 13.3 A! Fortunately, the peak current is also lowered by the
loop inductance. The spark discharges the capacitance in less than 30 ns and then the
remaining charge is fed through Rd and the protection diodes, as shown in Fig. 5.2.21.
[Fig. 5.2.21 schematic and waveforms: the charged body (15 kV, Cbody = 200 pF) discharges at t = 0 through the 1 m probe cable (100 pF/m, 5 ns delay, Z0 = 50 Ω) into the spark gap (2 kV), Rd = 150 Ω, protection diodes D1/D2 (6V3) to the supplies, Rin = 1 M, Cin = 20 pF at the JFET gate; plots of body [kV], spark [kV], iRd [A] and gate [V] over 0–250 ns, with the time constants Rd·CT and Rin·CT, where CT = Cbody + Ccable + Cin.]
Fig. 5.2.21: A human body model of electrostatic discharge into the oscilloscope input. About
5 ns after touching the probe tip the probe cable is charged and the voltage reaches the spark gap.
The spark gap fires and limits the voltage to its firing threshold. The arc provides a low
impedance path, discharging the body and cable capacitance until the voltage falls below the
firing threshold (~25 ns). The remaining charge is fed through Rd and one of the protection
diodes, until the voltage falls below Vcc + VD1 (~250 ns). Finally, the capacitance is discharged
through Rin.
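The discharge sequence can be walked through numerically with the figure's values; all three results are the order-of-magnitude estimates quoted in the text.

```python
# Numeric walk through the ESD sequence of Fig. 5.2.21.
C_BODY, C_CABLE, C_IN = 200e-12, 100e-12, 20e-12   # F
V_BODY, V_SPARK, R_D = 15e3, 2e3, 150.0            # V, V, ohm

# 1) capacitive divider while the cable charges (first 5 ns)
v_initial = V_BODY * C_BODY / (C_BODY + C_CABLE)
print(f"voltage at spark gap: {v_initial/1e3:.0f} kV")   # -> 10 kV

# 2) peak diode current once the spark gap clamps at ~2 kV
i_dpk = V_SPARK / R_D
print(f"peak diode current : {i_dpk:.1f} A")             # -> 13.3 A

# 3) discharge time constant of the remaining charge through Rd
c_total = C_BODY + C_CABLE + C_IN
print(f"Rd*CT time constant: {R_D*c_total*1e9:.0f} ns")  # -> 48 ns
```

A few such time constants fit comfortably inside the ~250 ns window of the figure, after which only the slow Rin·CT decay remains.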
A different situation occurs in case of a long term overdrive. Fig. 5.2.22 shows
the protection network.
[Fig. 5.2.22 schematic: the 230 V, 50 Hz source drives the input through Cp = 1.5 nF in parallel with Rp = 150 k, then through Rd = 150 Ω to the protection diodes D1 (to Vcc) and D2 (to Vee) and the unity gain buffer (Av = +1).]
Fig. 5.2.22: Input protection network for long term overdrive.
The most severe long term input overdrive occurs when the oscilloscope input
is on its highest sensitivity setting (no attenuation) and the user inadvertently connects
a 1:1 probe to a high voltage DC or AC power supply. A typical highest sensitivity
setting of 2 mV/div, or 8 mV range, is brutally exceeded by the 230 Veff, 650 Vpp AC
mains voltage. Since with a well designed instrument nothing dramatic would happen
(no flash, no bang, no smoke), the user might realize his error only after a while (a few
seconds at best, or several minutes in the evening at the end of a long working day).
The instrument must be designed to withstand such a condition indefinitely.
With component values as in Fig. 5.2.22 the peak current through Rp is:

IRpk = (Vinpk − Vcc)/Rp = (325 − 10)/(150·10³) ≈ 2.1 mA

and the peak current through Cp:

ICpk = (Vinpk − Vcc)·ω·Cp = (325 − 10)·(2π · 50 · 1.5·10⁻⁹) ≈ 0.15 mA

Of course, IC leads IR in phase by π/2, so the total current through Rp is the
vector sum, sqrt(IC² + IR²), and its value is essentially that of IR, since the mains
frequency (50–60 Hz) is much lower than the network cutoff, 1/(2π Cp Rp), or 707 Hz.
One could easily be unimpressed by such low current values; however, we
must not forget the transient conditions. With abundant help from Mr Murphy, we
shall make the connection at the instant when the mains voltage is at its peak. And
Mr Gauss will ensure a 50 % probability that the instantaneous voltage will be above
the effective value. Then the input current is limited by Rd only (2.1 A peak!).
Fortunately the current falls exponentially with the Rd·Cp time constant (225 ns), so
the transient is over in about 1 µs.
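The steady-state current bookkeeping above is easy to check in a few lines, using the figure's component values and the 325 V mains peak against an assumed 10 V clamp rail:

```python
# Check of the long-term overdrive currents for Fig. 5.2.22.
from math import pi, hypot

V_PK, V_CC = 325.0, 10.0                 # mains peak, clamp rail (V)
R_P, C_P, R_D, F = 150e3, 1.5e-9, 150.0, 50.0

i_r = (V_PK - V_CC) / R_P                # resistive branch
i_c = (V_PK - V_CC) * 2 * pi * F * C_P   # capacitive branch, 90 deg ahead
i_total = hypot(i_r, i_c)                # vector sum of the two
f_c = 1 / (2 * pi * C_P * R_P)           # network cutoff frequency

print(f"I_R = {i_r*1e3:.2f} mA, I_C = {i_c*1e3:.2f} mA")
print(f"vector sum = {i_total*1e3:.2f} mA (essentially I_R)")
print(f"cutoff = {f_c:.0f} Hz, Rd*Cp = {R_D*C_P*1e9:.0f} ns")
```

Since the capacitive term is more than a decade below the resistive one, the vector sum is dominated by I_R, exactly as the text argues.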
The value of Cp should not be too low, either; note that it forms a capacitive
divider with the JFET input capacitance and ground strays. If these are about 1.5 pF,
the high frequency gain will be lower than at DC by about 0.1 %.
Note also that all these components must be specified to survive voltage
transients of at least 500 V, so their larger physical dimensions will increase the circuit
size and, as a consequence, the parasitic loop inductance and stray capacitances.
Note also that by using a 2× basic input attenuation, the 150 kΩ and 1.5 nF can
be safely eliminated, because the attenuator takes over their function.
5.2.8 Driving the Low Impedance Attenuator
The high impedance attenuator, discussed in Sec. 5.2.1, is almost exclusively
implemented as a two or three decade switch. The intermediate attenuation and gain
settings of the 1–2–5 sequence vertical sensitivity selector are usually realized in the
stages following the FET source follower. For the highest bandwidth the 1–2–5
attenuator is designed as a 50 Ω resistive divider, and there are some advantages
(regarding the linear signal handling range) if this attenuator is put immediately after
the FET source follower. However, the FET by itself can not drive such a low
impedance load, and additional circuitry is required to help it do so.
An interesting solution is shown in Fig. 5.2.23, patented by John L. Addis in
1983 [Ref. 5.21].
The input FET Q1 is biased by the constant current source Q2, as we have seen
in Fig. 5.2.13. It is also actively compensated for large signal transient nonlinearity (by
C2 and Q4) and bootstrapped by Q3, which reduces the input capacitive loading by
keeping the voltage at Cgd nearly constant. The output complementary emitter
follower is driven from the Q1 source and R3 (R3 should be equal to R4 to reduce the
DC offset). The Q3 bootstrap is provided by DZ2 which, together with DZ3, also
bootstraps the bias circuit (R9,10 and D3,4) for the bases of Q5 and Q6, lowering in
this way the load on the Q1 source.
[Fig. 5.2.23 schematic: input FET Q1 with current source Q2, bootstrap Q3, compensation Q4/C2, complementary output pair Q5/Q6, bias network R7–R15 with D1–D4 and DZ1–DZ3, and the load RL, between Vcc and Vee.]
Fig. 5.2.23: The FET source follower with a buffer is capable of driving
a low impedance load (such as a 50 Ω 1–2–5 attenuator section).
Bootstrapping increases the DC and low frequency impedance seen by Q1, but
note that its use will make sense at high frequencies only if Q5,6 and Q3 are
substantially faster than the FET itself. Otherwise the bootstrap circuitry would only
increase the parasitic capacitances and thus increase the rise time.
With enough driving capability made available by Q5 and Q6, the load resistor
RL can now be realized as the low impedance three step attenuator with a direct path
and two attenuated paths, ÷2 and ÷4. Besides the usual maximum input sensitivity of
5 mV/div., this attenuator will provide the next two settings of 10 and 20 mV/div.
The following lower sensitivity settings (50 mV/div., etc.) are achieved by switching
in the ÷10 and ÷100 sections of the high impedance input attenuator, achieving the
lowest sensitivity of 2 V/div. An external ÷10 probe will decrease this to 20 V/div.
Fig. 5.2.24 shows two possible implementations of the low impedance divider,
having 50 Ω impedance at both input and output. The first attenuator design is based
on the T-type network and the other on the π-type network. If the input signal is a
current, the series 50 Ω in the π branch can be omitted.
[Fig. 5.2.24 artwork: the ÷1, ÷2 and ÷4 taps realized as a T-type ladder (arm values 25, 12.5, 37.5 and 40.625 Ω) and as a π-type ladder (arm values 25, 37.5 and 12.5 Ω), both with 50 Ω terminations at input and output.]
Fig. 5.2.24: The low impedance attenuator (50 Ω input and output) can be
built as a straight T-type ladder or a π-type ladder. If driven by a current
source, the series 50 Ω in the π branch can be omitted.
Here we assume that the input impedance of the following amplifier stage is
high enough and its input capacitance low enough for the division factor to remain
correct at each setting and for the bandwidth not to change. It is also important to keep
the switch capacitive cross-talk low and to preserve the nominal impedance by
designing the switch as a microstrip transmission line.
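The matched-attenuator idea behind Fig. 5.2.24 can be illustrated with the textbook symmetric T pad formulas. Note this is a generic sketch, not the exact ladder of the figure, whose arm values also serve the tapped ÷1/÷2/÷4 structure.

```python
# Illustration: a symmetric T pad that keeps 50 ohm at both ports
# for a chosen voltage division K (textbook pad formulas).
def t_pad(z0, k):
    """Series arms r1 and shunt arm r3 of a symmetric T attenuator."""
    r1 = z0 * (k - 1) / (k + 1)
    r3 = 2 * z0 * k / (k * k - 1)
    return r1, r3

def input_impedance(z0, k):
    """Impedance seen at one port with the other terminated in z0."""
    r1, r3 = t_pad(z0, k)
    shunt = r3 * (r1 + z0) / (r3 + r1 + z0)  # r3 parallel (r1 + z0)
    return r1 + shunt

for k in (2, 4):
    r1, r3 = t_pad(50.0, k)
    print(f"K={k}: series {r1:.2f}, shunt {r3:.2f}, "
          f"Zin = {input_impedance(50.0, k):.2f}")
```

Whatever the division factor, the terminated pad always presents exactly 50 Ω, which is what keeps the bandwidth constant across settings.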
In addition to the discrete step attenuation, oscilloscopes, as well as other high
speed instruments, often need a continuously variable attenuation (or gain), although
within a restricted range (a range of 0.3 to 1 is often enough). A passive potentiometer
is, of course, an obvious solution and it was used extensively in the early days. However,
this potentiometer is usually placed somewhere in the middle of the amplifier and its
control shaft has to be brought to the instrument front panel, which can often be a
mechanical nightmare. Also, its variable impedance causes the bandwidth to vary, and
this is a very undesirable property. An electronically controlled amplifier gain with
constant bandwidth would therefore be welcome. We will examine such circuits at the
end of Sec. 5.4.
The feedback concept is so simple and works so well that too many people
take it for granted; and equally many are surprised to discover that it can cause as
much trouble as the solutions it offers.
[Fig. 5.3.1 schematic: differential pair Q1/Q2 (tail current I1, source resistance Rs), level translator Q3 with collector load Rc (a slow "lateral" PNP transistor, with Ccb marked), output emitter follower Q4 (bias currents I2, I3), and the feedback divider Rf/Re returning the output to the Q2 base, between Vcc and Vee.]
Fig. 5.3.1: The classical opamp, simplified. The input differential pair Q1 and Q2
subtracts the feedback from the input signal, driving with the difference the level
translator stage Q3, which in turn drives the output emitter follower Q4, which
provides a low output impedance. The feedback voltage is derived from the output
voltage vo by dividing it as β = Re/(Re + Rf). If the opamp open loop gain A0 is
much higher than 1/β, the closed loop gain is Acl ≈ 1/β (see text).
In the (highly simplified) opamp circuit in Fig. 5.3.1 the open loop gain is
equal to the gain of the differential pair Q1 and Q2, multiplied by the gain of the level
translator Q3. The output emitter follower Q4 has unit voltage gain. However, the
gains of all three stages are frequency dependent, as was explained in Part 3.
Fortunately, the three poles are far apart (all are real) and the poles of the first and the
third stage can be (and usually are) easily set high enough for the amplifier open loop
frequency response to be dominated by the second stage pole (which was in turn
named the dominant pole).
The dominant pole of the circuit in Fig. 5.3.1 is set by the Q1 collector resistor
Rc and the Miller capacitance CM:
ω0 = 1/(Rc CM)      (5.3.1)

where we have neglected the input resistance of Q3 in parallel with Rc, which we can
do if Rc is small.
CM appears effectively in parallel with Rc and its value is equal to the Q3
collector to base capacitance Ccb, multiplied by the Q3 gain:

CM = Ccb (1 + A3)      (5.3.2)

A3 = (qe I2 / kB T) · Rb4      (5.3.3)

where Rb4 is the resistance loading the Q3 collector at the Q4 base, and:

gm1 = gm2 = qe I1 / (2 kB T)      (5.3.4)

A1 = gm1 Rc = qe I1 Rc / (2 kB T)      (5.3.5)

where we have assumed both transconductances to be equal, owing to the current I1
being equally divided into Ic1 and Ic2.
As a result, the open loop transfer function can be written as:

A(s) = A0 ω0/(s + ω0) = A1 A3 ω0/(s + ω0)      (5.3.6)
Now we can derive the closed loop transfer function. By considering that the
feedback factor β is set by the feedback resistive divider:

β = Re/(Re + Rf)      (5.3.7)

the voltage at the inverting input is equal to the output voltage multiplied by the
feedback factor:

vi = vo β      (5.3.8)

The signal being amplified is the difference between the source voltage and
the voltage provided by feedback:

Δv = vs − vi      (5.3.9)

This voltage is amplified by the amplifier open loop transfer function, A(s), to
give the output voltage:

vo = A(s) Δv      (5.3.10)

By inserting Eq. 5.3.8 and 5.3.9 into Eq. 5.3.10:

vo = A(s) (vs − β vo)      (5.3.11)

and collecting the vo terms:

vo [1 + β A(s)] = A(s) vs      (5.3.12)

vo = [A0 ω0/(s + ω0)] / [1 + β A0 ω0/(s + ω0)] · vs      (5.3.13)
or:

vo/vs = (1/β) · 1 / [1 + (s + ω0)/(β A0 ω0)]      (5.3.14)

vo = (1/β) · 1 / [1 + (s + ω0)/(β A0 ω0)] · vs      (5.3.15)

From this last expression it is obvious that if the open loop gain A0 is very
high, the amplifier gain vo/vs is reduced to the familiar 1/β, or (Rf + Re)/Re.
Likewise, for a finite value of A0, the frequency dependent part increases, thus
lowering the closed loop gain at higher frequencies.
Take, for example, the µA741 opamp, Fig. 5.3.2: owing to its dominant pole,
the open loop cut off frequency is at about 10 Hz, whilst the open loop gain at DC is
about 10⁵. The unity gain crossover frequency f1 is therefore about 1 MHz.
[Fig. 5.3.2 plot: open loop gain magnitude A(f) in dB (0 to 100 dB) and phase φ(f) (0 to −180°) over 100 Hz to 10 MHz, with A0, the open loop corner f0 (−3 dB), the closed loop gain ACL = 1 + Rf/Re = 10, the closed loop cutoff fh, and the unity gain frequency f1 marked; dashed curves show the effect of a secondary pole.]
Fig. 5.3.2: A typical opamp open loop gain and phase compared to the closed loop
gain. The dashed lines show the influence of a secondary pole (usually the input
differential stage pole), which, for stability requirements, must be set at or above the
unity gain transition frequency, f1 ≈ 1 MHz. fh is the closed loop cutoff frequency.
For a closed loop gain of 10, β = 0.1; since the frequency dependent term is
a ratio, the factor 2π can be extracted and canceled, leaving f0/(jf + f0), where f0 is
the open loop cutoff frequency. By putting this into Eq. 5.3.15, we see that the
amplifier will be making corrections to its own non-linearities by a factor of 10⁴ (80 dB)
at 1 Hz, but only by a factor of 10² (40 dB) at 1 kHz; and at 100 kHz there would be
only 3 dB of feedback, resulting in a 50 % gain error. This means that for a source
signal of 0.1 V there would be a Δv of 0.05 V, resulting in an output voltage of
vo = 0.5 V (instead of the 1 V as at low frequencies). Above the closed loop cutoff
frequency the amplifier has practically no feedback at all.
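The shrinking error-correction budget can be tabulated directly from the single-pole model (A0 = 10⁵, f0 = 10 Hz, β = 0.1, as in the example above):

```python
# Loop gain vs. frequency for the single-pole opamp model above.
A0, F0, BETA = 1e5, 10.0, 0.1

def a_open(f):
    """Open loop gain A(f) = A0 / (1 + j f/f0)."""
    return A0 / (1 + 1j * f / F0)

def a_closed(f):
    """Closed loop gain A / (1 + beta*A)."""
    a = a_open(f)
    return a / (1 + BETA * a)

for f in (1.0, 1e3, 1e5):
    loop = abs(BETA * a_open(f))
    print(f"{f:>8.0f} Hz: loop gain {loop:10.1f}, "
          f"closed loop gain {abs(a_closed(f)):.2f}")
```

At 1 Hz the loop gain is ~10⁴ and the closed loop gain sits firmly at 10; at 100 kHz the loop gain has fallen to ~1 and the closed loop gain has already dropped by about 3 dB.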
An additional error is owed to the phase shift: at 100 kHz a single pole
amplifier would have the output at a 90° phase lag against the input. An amplifier with
an additional input differential stage pole at 1 MHz would shift the phase by 135°, so
there would be only a 45° phase margin at this frequency and the circuit would be
practically at the edge of closed loop stability. If we were to need this amplifier to
drive a 2 m long coaxial cable (capacitance ≈ 200 pF), by considering the amplifier
output impedance of about 75 Ω, the additional phase shift of 5° would be enough to
turn the amplifier into a high frequency oscillator.
5.3.2 Slew Rate Limiting
The discussion so far is valid for small signal amplification. For large
signals the bandwidth will be much lower than the small signal one. This is owed to
the Miller capacitance causing Q3 to act as an integrator. For a positive input step
larger than 2 kB T/qe (+ I1 Re1 if the input differential pair has emitter degeneration
resistors), the transistor Q1 will be fully open, while Q2 will be fully closed.
Therefore, the maximum current available to charge CM will be equal to the tail
current I1. The voltage across CM will increase linearly until the input differential
stage comes out of saturation. Consequently, the slew rate limit is:
SR = dv/dt = I1/CM      (5.3.16)
Usually I1 is of the order of 100 µA (or even lower if low noise is the main design
goal). Also, owing to the gain of Q3, the Miller capacitance CM can be large; say, with
Ccb ≈ 4 pF and A3 ≈ 50, CM will be about 200 pF, giving a slew rate SR ≈ 0.5 V/µs.
We know that for a sine wave the maximum slope occurs at the zero crossing, where the
derivative is dv/dt = d(Vp sin ωt)/dt = ω Vp cos ωt; at the zero crossing t = 0 and
cos(0) = 1, so the slew rate equation can be written as:

SR = ω Vp = I1/CM      (5.3.17)
For a supply voltage of ±15 V, the signal amplitude just before clipping would
probably be around ±12 V, so the maximal full power sine wave frequency would be
fmax = I1/(2π CM Vp), or approximately 6.5 kHz only!
The frequency at which the sine wave becomes a linear ramp, with a nearly
equal peak amplitude, is slightly higher: fr = 1/(4 tr) = SR/(4 Vp) ≈ 10 kHz (note that
the SR of the circuit in Fig. 5.3.1 is not symmetrical, since CM is charged by I1 and
discharged through Rc; in an actual opamp circuit, such as in Fig. 5.3.3, Rc is
replaced by a current mirror, driven by the collector current of Q2, giving a
symmetrical slew rate).
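The slew rate chain of Eq. 5.3.2, 5.3.16 and 5.3.17 reproduces all the numbers above from just three device values (I1 = 100 µA, Ccb = 4 pF, A3 = 50):

```python
# Slew-rate numbers from the text: I1 = 100 uA, Ccb = 4 pF, A3 = 50.
from math import pi

I1, C_CB, A3, V_P = 100e-6, 4e-12, 50.0, 12.0

c_m = C_CB * (1 + A3)            # Miller capacitance, Eq. 5.3.2
sr = I1 / c_m                    # slew rate limit, Eq. 5.3.16
f_max = sr / (2 * pi * V_P)      # full-power sine frequency, Eq. 5.3.17
f_ramp = sr / (4 * V_P)          # frequency where the sine becomes a ramp

print(f"CM     = {c_m*1e12:.0f} pF")     # -> 204 pF
print(f"SR     = {sr/1e6:.2f} V/us")     # -> 0.49 V/us
print(f"f_max  = {f_max/1e3:.1f} kHz")   # -> 6.5 kHz
print(f"f_ramp = {f_ramp/1e3:.1f} kHz")  # -> 10.2 kHz
```

The exercise makes the point of the section vivid: a 1 MHz small-signal amplifier delivers full output swing only up to a few kHz.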
[Fig. 5.3.3 schematic: differential pair Q1/Q2 (tail current I1, source resistance Rs) with current mirror load Q5/Q6, the second stage drawn as a Miller integrator (Ccb around a gain block), unity gain output buffer A3 (+1), and the feedback divider Rf/Re, between Vcc and Vee.]
Fig. 5.3.3: Simplified conventional opamp circuit with the current mirror as the active
load of Q1. The second stage is modeled as a Miller integrator with large gain. This
circuit exhibits symmetrical slew rate limiting. The dominant pole is ω0 = gm/CM, where
gm is the differential amplifier's transconductance and CM = Ccb (1 + A3).
[Fig. 5.3.4 schematic: input transistor Q1 (source resistance Rs), feedback current ifb from the Rf/Re divider into the Q1 emitter, diode connected Q2 forming a current mirror with Q3 (Ccb marked), output follower Q4, and the current sources I1, I2, I3, between Vcc and Vee.]
Fig. 5.3.4: Current feedback opamp, derived from the voltage feedback opamp (Fig. 5.3.1):
we first eliminate Q2 from the input differential amplifier and introduce the feedback into
the Q1 emitter (low impedance!). Next, we load the Q1 collector by a diode connected Q2,
forming a current mirror with Q3. Finally, we use very low values for Rf and Re. The
improvements in terms of speed are two: first, for large signals, the current available for
charging Ccb is almost equal to the feedback current ifb, eliminating slew rate limiting;
second, Ccb is effectively grounded by the low impedance of Q2, thus avoiding the Miller
effect. A disadvantage is that the voltage gain is provided by Q3 alone, so the loop gain is
lower. Nevertheless, high frequency distortion can be lower than in classical opamps,
because, for an equivalent semiconductor technology, the dominant pole is at least two
decades higher, providing more loop gain for error correction at high frequencies.
The amplifier in Fig. 5.3.4 would still run into slew rate limiting for high amplitude signals, owing to the fixed bias of the first stage current source I_1. This is avoided by using a complementary symmetry configuration, as shown in Fig. 5.3.5. Of course, complementary symmetry can be used throughout the amplifier, not just in the first stage.
Fig. 5.3.5: A fully complementary current feedback amplifier model. It consists of four parts: transistors Q_1–Q_4 form a unity gain buffer, the same as Q_9–Q_12, with the four current sources providing bias; Q_5,7 and Q_6,8 form two current mirrors. In contrast to the voltage feedback circuit, both of whose inputs are of high impedance, the inverting input of the current feedback amplifier is the low impedance output of the first buffer. The current flowing in or out of the emitters of Q_3,4 is (nearly) equal to the current at the Q_3,4 collectors. This current is reflected by the mirrors and converted into a voltage at the Q_7,8 collectors, driving the output unity gain buffer. The circuit stability is ensured by the transimpedance Z_T, which can be modeled as a parallel connection of a capacitor C_T and a resistor R_T. The closed loop bandwidth is set by R_f and the gain by R_e (the analysis is presented later in the text). One of the first amplifiers of this kind was the Comlinear CLC400.
[Figure: cross-sections of the NPN and PNP transistors in the VIP2 (1988) complementary bipolar process (poly-Si emitters, n-epi and p-well regions over n+ and p+ buried layers on a p-substrate). The numeric entries of Table 5.3.1 are not recoverable from the extraction.]
Table 5.3.1: Typical production parameters of the Complementary Bipolar process [Ref. 5.33]
Although the same technology is now used for conventional voltage feedback amplifiers as well, the current feedback structure offers further advantages which result in improved circuit bandwidth, as the following discussion shows.
The stability of the amplifier in Fig. 5.3.5 is ensured by the so-called transimpedance network, Z_T, which can be modeled as a parallel R_T C_T network. Note that the compensating capacitor, C_T (consisting of 4 parts, C_T1–C_T4), is effectively grounded, as can be seen in Fig. 5.3.7, since the C_cb of Q_9,10 are tied to the supply voltages directly, whilst the C_cb of Q_7,8 are tied to the supply by the low impedance CE path of Q_5,6.
Fig. 5.3.7: The capacitance C_T consists of four components, all effectively grounded.
Fig. 5.3.8: Current feedback amplifier model used for the analysis.
Imagine for a moment that R_f is taken out of the circuit. Essentially this would be an open loop configuration, the gain of which can be expressed by the ratio of two resistors, R_T/R_e. The gain is provided by the current mirrors; their output currents are summed, so the two mirrors behave like a single stage; consequently the gain value, compared with that of a conventional opamp, is relatively low (in practice, maximum gains between 60 and 80 dB are common). It is important to note, however, that the open loop (voltage) gain does not play such an important role in current feedback amplifiers. As the name implies, it is more important how the feedback current is processed.
Let us now examine a different situation: we put back R_f and disconnect R_e. If there were any voltage difference between the outputs of the two buffers, a current would be forced through R_f, increasing the output current of the first buffer, A_1. The two current mirrors would reflect this onto the input of the second buffer, A_2, in order to null the output voltage difference. In other words, the output of the first buffer A_1 represents an inverting current mode input of the whole system.
If we now reconnect R_e it is clear that the A_1 output must now deliver an additional current, i_e, flowing to ground. The current increase is reflected by the mirrors into a higher v_T, so the output voltage v_o would increase, forcing the current i_f (through R_f) into the A_1 output. Looking from the A_1 output, i_e flows in the direction opposite to i_f, so the total current i_fb of the A_1 output will be equal to their difference. Thus R_e and R_f form a classical feedback divider network, but the feedback signal is a current. As expected, the output of A_2 must now become (R_f + R_e)/R_e times higher than the output of A_1 to balance the feedback loop.
Deriving the circuit equations is simple. The transimpedance equation (assuming an ideal unity gain buffer A_2, thus v_T = v_o) is:

v_o = Z_T i_fb    (5.3.18)
The feedback current (assuming an ideal unity gain buffer A_1, thus v_fb = v_s) is:

i_fb = v_s/R_e − (v_o − v_s)/R_f = v_s (1/R_e + 1/R_f) − v_o/R_f    (5.3.19)

Inserting v_o from Eq. 5.3.18 and solving for the closed loop gain:

v_o/v_s = (1 + R_f/R_e) · 1/(1 + R_f/Z_T)    (5.3.20)
We see that the equation for the closed loop gain has two terms, the first resulting from the feedback network divider and the second containing the transimpedance Z_T and R_f, but not R_e! This is in contrast to what we are used to in conventional opamps.
If we now replace Z_T by its equivalent network, 1/(sC_T + 1/R_T), then the closed loop gain of Eq. 5.3.20 can be rewritten as:

v_o/v_s = (1 + R_f/R_e) · 1/(1 + R_f/R_T + sC_T R_f)    (5.3.21)
We can rewrite this to reveal the system's pole, in the way we are used to:

v_o/v_s = [(1 + R_f/R_e)/(1 + R_f/R_T)] · [(1 + R_f/R_T)/(C_T R_f)] / [s + (1 + R_f/R_T)/(C_T R_f)]    (5.3.22)

By comparing this with the general single pole system transfer function:

F(s) = A_0 · ω_1/(s + ω_1)    (5.3.23)

we identify:

ω_1 = (1 + R_f/R_T)/(C_T R_f)    (5.3.24)

which is the closed loop pole; it sets the closed loop cutoff frequency: f_h = ω_1/2π.
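Since R_T is normally much larger than R_f, the pole of Eq. 5.3.24 is very nearly 1/(C_T R_f): the bandwidth is set by R_f alone and R_e does not enter. A quick numerical check, using the component values that appear later in Fig. 5.3.12 (an illustrative sketch, not a general rule):

```python
from math import pi

# Component values as used later in Fig. 5.3.12.
R_f = 1e3      # feedback resistor [ohm]
R_T = 300e3    # transimpedance resistance [ohm]
C_T = 1.6e-12  # transimpedance capacitance [F]

w1      = (1 + R_f / R_T) / (C_T * R_f)  # closed-loop pole, Eq. 5.3.24 [rad/s]
f_h     = w1 / (2 * pi)                  # closed-loop cutoff [Hz]
f_h_apx = 1 / (2 * pi * C_T * R_f)       # R_T >> R_f approximation

print(f"f_h        = {f_h / 1e6:.1f} MHz")      # ~99.8 MHz
print(f"f_h approx = {f_h_apx / 1e6:.1f} MHz")  # ~99.5 MHz; R_e is absent
```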
Substituting v_o from Eq. 5.3.21 into Eq. 5.3.19 gives the feedback current:

i_fb = v_s (1/R_e + 1/R_f) − (v_s/R_f)(1 + R_f/R_e)/(1 + R_f/R_T + sC_T R_f)    (5.3.25)

i_fb = v_s (1/R_e + 1/R_f) · [1 − 1/(1 + R_f/R_T + sC_T R_f)]    (5.3.26)

Since at high frequencies nearly all of the feedback current flows into C_T, we can express i_CT (see the transient response in Fig. 5.3.9) as:

i_CT ≈ i_fb = v_s (1/R_e + 1/R_f) · s/[s + (1 + R_f/R_T)/(C_T R_f)]    (5.3.27)
Fig. 5.3.9: Current on demand: the step response reveals that the feedback current is proportional to the difference between the input and output voltage, essentially a high pass version of the output voltage, as shown by Eq. 5.3.27.
In Fig. 5.3.10 we compare the cut off frequency vs. gain of a voltage feedback
and a current feedback amplifier. The voltage feedback amplifier bandwidth is
inversely proportional to gain; in contrast, the current feedback amplifier bandwidth
is, in principle, independent of gain. This property makes current feedback amplifiers
ideal for wideband programmable gain applications.
Fig. 5.3.10: Comparison of closed loop cut off frequency vs. gain of a conventional and a
current feedback amplifier. The conventional amplifier has a constant GBW product (higher
gain, lower cut off). But the current feedback cut off frequency is (almost) independent of gain.
Fig. 5.3.11: The resistors R_1–R_4 used to balance the inputs are the cause of the non-zero inverting input resistance, R_3||R_4, modeled by R_o; this causes an additional voltage drop, reducing the feedback current (see the analysis).
The current balance at the inverting input node now includes the first buffer's output resistance, R_o:

i_fb = v_fb (1/R_e + 1/R_f) − v_o/R_f    (5.3.28)

whilst the voltage at the inverting input is reduced by the drop of i_fb across R_o:

v_fb = v_s − i_fb R_o    (5.3.29)

Note that the last term in this equation contains the feedback current from Eq. 5.3.28:

i_fb = (v_s − v_fb)/R_o    (5.3.30)

and from the transimpedance equation Eq. 5.3.18 the feedback current required to produce the output voltage v_o is:

i_fb = v_o/Z_T    (5.3.31)
By substituting v_fb and i_fb in Eq. 5.3.28, we obtain the transfer function:

v_o/v_s = (1 + R_f/R_e) / {1 + [R_f + R_o (1 + R_f/R_e)]/Z_T}    (5.3.32)

By replacing Z_T with its equivalent parallel network:

Z_T = 1/(sC_T + 1/R_T)    (5.3.33)
we obtain:

v_o/v_s = (1 + R_f/R_e) / {1 + [R_f + R_o (1 + R_f/R_e)]/R_T + sC_T [R_f + R_o (1 + R_f/R_e)]}    (5.3.34)

which we re-order in the usual way to separate the DC gain from the frequency dependent part:
v_o/v_s = {(1 + R_f/R_e) / (1 + [R_f + R_o (1 + R_f/R_e)]/R_T)}
          · {(1 + [R_f + R_o (1 + R_f/R_e)]/R_T) / (C_T [R_f + R_o (1 + R_f/R_e)])}
          / {s + (1 + [R_f + R_o (1 + R_f/R_e)]/R_T) / (C_T [R_f + R_o (1 + R_f/R_e)])}    (5.3.35)
Again, a comparison with the general normalized first-order transfer function:

F(s) = A_0 · ω_1/(s + ω_1)    (5.3.36)
gives the DC gain:

A_0 = (1 + R_f/R_e) / {1 + [R_f + R_o (1 + R_f/R_e)]/R_T}    (5.3.37)

and the closed loop pole:

ω_1 = {1 + [R_f + R_o (1 + R_f/R_e)]/R_T} / {C_T [R_f + R_o (1 + R_f/R_e)]}    (5.3.38)

which sets the closed loop cutoff frequency:

f_h = |ω_1|/2π    (5.3.39)

By introducing the ideal closed loop gain:

A_cl = 1 + R_f/R_e    (5.3.40)

and the gain error:

ε = (R_f + R_o A_cl)/R_T    (5.3.41)

the transfer function can be written more compactly as:

v_o/v_s = A_cl · [1/(1 + ε)] · {(1 + ε)/[C_T (R_f + R_o A_cl)]} / {s + (1 + ε)/[C_T (R_f + R_o A_cl)]}    (5.3.42)
Whilst the gain error ε is small, the bandwidth error can be rather high at high gain; e.g., if R_f = 1 kΩ and R_o = 10 Ω, with A_cl = 100 the time constant would double and the bandwidth would be halved. In Fig. 5.3.12 we have plotted the bandwidth reduction as a function of gain for a typical current feedback amplifier. Although not constant as in theory, the bandwidth is reduced far less than in voltage feedback opamps (about 50% less for a gain of 100).
[Plot parameters: C_T = 1.6 pF, R_T = 300 kΩ, R_f = 1 kΩ, R_o = 10 Ω; the curves are drawn for A_cl = 1, 10 and 100, with R_e = ∞, 110 Ω and 10.1 Ω, respectively.]
Fig. 5.3.12: The bandwidth of an actual current feedback amplifier is gain dependent, owing to a small but finite inverting input resistance R_o. The nominal bandwidth of 100 MHz at unity gain is reduced to 90 MHz at a gain of 10 and to only 50 MHz at a gain of 100 if R_o is 10 Ω. Nevertheless, this is still much better than in voltage feedback amplifiers.
From these relations we conclude two things: first, both the actual closed loop gain and bandwidth are affected by the desired closed loop gain A_cl = 1 + R_f/R_e; second and more important, for a given R_o, we can reduce R_f by R_o A_cl and recalculate the required R_e to arrive at slightly modified values which preserve both the desired gain and bandwidth!
Note that in the above analysis we have assumed a purely resistive feedback path; an additional influence of R_o will show up when we consider the effect of stray capacitances in the following section.
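The gain-dependent bandwidth and the R_f correction can be put into numbers. The sketch below neglects the R_T terms of Eq. 5.3.42 (R_T ≫ R_f), so f_h ≈ 1/[2πC_T(R_f + R_o A_cl)]; the component values are those of Fig. 5.3.12.

```python
from math import pi

# Values from Fig. 5.3.12; R_T neglected (R_T >> R_f).
C_T, R_f, R_o = 1.6e-12, 1e3, 10.0

def f_h(A_cl, R_f=R_f):
    """Closed-loop cutoff of a CFA with inverting-input resistance R_o."""
    return 1 / (2 * pi * C_T * (R_f + R_o * A_cl))

for A_cl in (1, 10, 100):
    print(f"A_cl = {A_cl:3d}: f_h = {f_h(A_cl) / 1e6:5.1f} MHz")
# ~98.5, ~90.4 and ~49.7 MHz, matching Fig. 5.3.12.

# Preserving the bandwidth at A_cl = 10: reduce R_f by R_o*A_cl and
# recompute R_e for the same gain (A_cl = 1 + R_f/R_e).
A_cl = 10
R_f2 = R_f - R_o * A_cl    # 900 ohm
R_e2 = R_f2 / (A_cl - 1)   # 100 ohm keeps A_cl = 10
print(f"corrected: R_f = {R_f2:.0f} ohm, R_e = {R_e2:.0f} ohm, "
      f"f_h = {f_h(A_cl, R_f2) / 1e6:.1f} MHz")  # back to ~99.5 MHz
```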
5.3.5 Noise Gain and Amplifier Stability Analysis
A classical voltage feedback, unity gain compensated amplifier (for which any secondary pole lies above the open loop unity gain crossover) usually remains stable if a capacitor C_f is added in parallel with R_f, as in Fig. 5.3.13a. Because C_f lowers the bandwidth, it is often used to prevent problems at and above the closed loop cutoff. In contrast, a capacitor C_e in parallel with R_e, as shown in Fig. 5.3.13b, would reduce the feedback at high frequencies, leading to instability.
Fig. 5.3.13: a) A unity gain compensated voltage feedback amplifier remains stable with capacitive feedback, whilst in b) it is unstable. In contrast, the stability of a current feedback amplifier is upset by either a) C_f in parallel with R_f, or b) C_e in parallel with R_e.
The noise gain is inverting in phase, but equal in value to the non-inverting signal gain:

A_N = v_o/v_N = −(1 + R_f/R_e)    (5.3.43)
For the voltage feedback amplifier the closed loop bandwidth is equal to the ratio of the unity gain bandwidth frequency and the noise gain:

f_cl = f_1/|A_N|    (5.3.44)
Fig. 5.3.15: An arbitrary example of the phase angle estimated from the gain magnitude, its slopes and the various breakpoints. If two breakpoints are relatively close, the phase does not actually reach the value predicted from the slope, but an intermediate value instead.
In the same manner, along the amplifier open loop gain asymptotes we draw the noise gain, as in Fig. 5.3.16, and we look at the crossover point of these two characteristics. Two important parameters can be derived from this plot: the first is the amount of gain at the crossover frequency f_c; the second is the relative slope difference between the two lines at f_c, which also serves as an indication of their phase difference.
If the available gain at f_c is greater than unity, the phase difference determines the amplifier stability. The feedback can be considered negative and the amplifier operation stable if the loop phase margin is at least 45°; this means that, if a 360° phase shift is positive, the maximum phase shift within the feedback loop must always be less than 315° (if A(f_c) > 1). Since the inverting input provides 180°, the total phase shift of the remaining amplifier stages (secondary poles) and the feedback network should never exceed 135°. Note also that a phase margin of 90° or more results in a smooth transient response; for a phase margin between 90° and 45° an increasing amount of peaking results.
Fig. 5.3.16: Voltage feedback amplifier noise gain, derived from the equivalent circuit noise model (a noise generator in series with the input of a noiseless amplifier). The Bode plot shows the relationships between the most important parameters (|A(f)| is the open loop gain magnitude, f_0 is the dominant pole, and f_s is the secondary pole, owed to the slowest internal stage). The inverse of the feedback attenuation β is the noise gain and it is equivalent to the amplifier closed loop gain A_cl. Note that the noise gain is flat up to and beyond the open loop crossover f_c, owing to the zero at 1/[2π(R_f||R_e)C_f]. The amplifier is stable, since the noise gain crosses the open loop gain at a point where their slope difference is 20 dB/decade. If the amplifier open loop gain were higher, the gain at the secondary pole (at f_s) would be higher than unity and the slope difference would be 40 dB/decade. Then the increased phase (135° at f_s and approaching 180° above), along with the 180° of the amplifier inverting input, would make the feedback positive (approaching 360°) and the amplifier would oscillate.
Now, let us find the noise gain of the voltage feedback amplifier in Fig. 5.3.16. Note that while there is some feedback available, the amplifier tries to keep the difference between the inverting and non-inverting input as small as its open loop gain allows; so, with a high open loop gain, the input voltage difference tends to zero (plus the DC voltage offset).
Note also that, in order to keep track of the phase inversion by the amplifier, we have added polarity indicators to the noise generator. If the + side of the noise generator v_N pushes the inverting input positive, the output voltage must go negative to compensate it.
With C_f in parallel with R_f we shall have:

v_o/v_N = −[1 + (R_f/R_e) · 1/(1 + sC_f R_f)]    (5.3.45)

v_o/v_N = −(1 + R_f/R_e) · [1 + sC_f (R_f||R_e)] / (1 + sC_f R_f)    (5.3.46)
Here we have a pole at 1/(C_f R_f) and a zero at 1/[C_f (R_f||R_e)]. Eq. 5.3.46 is the noise gain (and also the closed loop gain A_cl; see the two distinct breakpoint frequencies in Fig. 5.3.16). The inverse of this is the feedback attenuation β.
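The pole and zero of Eq. 5.3.46 are easy to evaluate; the component values below are hypothetical (chosen for illustration, not taken from the book).

```python
from math import pi

# Hypothetical example values for Eq. 5.3.46.
R_f, R_e, C_f = 1e3, 250.0, 10e-12

R_par  = R_f * R_e / (R_f + R_e)    # R_f || R_e
A_N0   = 1 + R_f / R_e              # low-frequency noise gain magnitude
f_pole = 1 / (2 * pi * C_f * R_f)   # noise-gain pole
f_zero = 1 / (2 * pi * C_f * R_par) # noise-gain zero (always above the pole)

print(f"|A_N(0)| = {A_N0:.1f}")     # 5.0
print(f"pole at {f_pole / 1e6:.1f} MHz, zero at {f_zero / 1e6:.1f} MHz")
# Above the zero the noise gain flattens at unity: C_f shorts R_f,
# so the divider degenerates and the full output is fed back.
```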
With the open loop gain as shown in Fig. 5.3.16 the amplifier is stable, since the noise gain crosses the open loop gain at f_c, where the open loop and closed loop slope difference is 20 dB/decade and the associated phase shift is (nearly) 90°. In addition to the 180° of the amplifier inverting input, the total phase angle is then 270°. The minimum phase margin for a stable amplifier would be 45°, and here we have 90° (360° − 270°), so the feedback can still be considered "negative".
However, if the open loop gain were higher (with the poles remaining at the same frequencies), the gain at f_s (the frequency of the secondary pole) could be greater than unity. In this case, at the crossover of the noise gain and the open loop gain, the slope difference would be 40 dB/decade, with an associated phase of 135° at f_s, approaching 180° above it. The feedback would become positive and, with the gain at f_s greater than unity, the amplifier would oscillate.
In the case of the current feedback amplifier in Fig. 5.3.17 we first note that instead of gain our Bode plot shows the feedback impedances and the amplifier transimpedance, all as functions of frequency.
Intuitively speaking, a capacitance C_f in parallel with R_f would reduce the impedance of the feedback network at high frequencies, thus also reducing the closed loop gain. However, intuition is misleading us: since the current feedback system bandwidth is inversely proportional to the feedback impedance in the f-branch (as demonstrated by Eq. 5.3.24), the addition of C_f increases the bandwidth. By itself this would be welcome, but note that at the crossover frequency f_c the slope difference between the transimpedance Z_T and the "noise transimpedance" (in analogy with the noise gain) is 40 dB/decade, causing a phase shift of 180°. This means that at f_c the feedback becomes positive and the amplifier will oscillate.
Fig. 5.3.17: For the current feedback amplifier we draw the impedances, not the gain. The feedback network impedance |Z_fb|, as seen from v_o, is slightly higher than R_f at DC (owing to R_o, the inverting input resistance) and falls to R_o||R_e at high frequencies; its inverse (about R_f) is the amplifier noise transimpedance, |Z_N|. The feedback network pole becomes the zero of the noise transimpedance: ω_z = 1/(C_f R_f) (f_z = |ω_z|/2π); likewise, the feedback zero becomes the noise transimpedance pole ω_p = 1/[C_f (R_o||R_e)] (f_p = |ω_p|/2π). At f_c, the crossover with |Z_T|, the slope difference is 40 dB/decade and the relative phase angle is 180°; the amplifier will inevitably oscillate, even if the secondary pole is far away and its Z_T breakpoint is well below R_f. The dashed line is the transimpedance required for stability, realized by a resistor in series with C_f.
Fig. 5.3.18: The current feedback amplifier and its noise transimpedance equivalent, v_o/i_fb.
We can find the noise transimpedance as simply as we found the noise gain for voltage feedback amplifiers. By assuming that the feedback current is noise generated, from the equivalent circuit in Fig. 5.3.18 we calculate the ratio of the output voltage and the feedback current:

v_o/i_fb = R_o + R_f (1 + R_o/R_e)    (5.3.47)

With C_f in parallel with R_f the noise transimpedance becomes:

v_o/i_fb = R_o + [R_f/(1 + sC_f R_f)] · (1 + R_o/R_e)    (5.3.48)
Fig. 5.3.19: Functionally equivalent circuits with conventional and current feedback amplifiers. Integrators, filters and current-to-voltage converters in inverting configurations cannot be achieved using a single CF amplifier. However, two-amplifier circuits can provide inherent amplifier pole compensation, which is very important at high frequencies. Filters can be realized in the non-inverting configuration, and feedback capacitance can be isolated by a resistive divider.
Fig. 5.3.20: This circuit exploits the ability of current feedback amplifiers to
adjust the bandwidth and gain peaking independently of the gain. The price to pay
is the lower slew rate limit. See the frequency and the step response in Fig. 5.3.21.
Fig. 5.3.21: a) Frequency response; b) step response of the amplifier in Fig. 5.3.20. The closed loop gain A_cl = 1 + R_f/R_e = 4, R_f = 150 Ω, R_e = 50 Ω, the source resistance R_s = 50 Ω, the stray capacitances C_s1,2 = 1 pF, the amplifier transcapacitance C_T = 1 pF, whilst R_b is varied from 150 Ω for the highest to 750 Ω for the lowest peaking.
Note, however, that in this way we lose the current-on-demand property of the CFB amplifier, since R_b will reduce the slew rate.
In a similar manner as was done for passive circuits in Part 2 and in Sec. 5.1, the resulting gain peaking can be used to improve the step response of a multi-stage system. As shown in Fig. 5.3.21, the gain peaking reveals the amplifier resonance, which decreases with increasing R_b, whilst the DC gain remains almost unchanged.
5.3.7 Improved Voltage Feedback Amplifiers
The lessons learned from the current feedback technology can be used to
improve conventional voltage feedback amplifiers.
Besides the improved semiconductor manufacturing technology, there are basically two approaches: one is to take the voltage feedback amplifier and modify it using the techniques of current feedback to avoid its weak points. One such example is shown in Fig. 5.3.22. The other way, exemplified by the circuit in Fig. 5.3.23, is to take the current feedback amplifier and modify it to make it appear to the outside world as a voltage feedback amplifier.
Fig. 5.3.22: The voltage feedback amplifier, improved. The transistors Q_1–Q_4 form a differential folded cascode, which drives the current mirror Q_5,6. In this way the input is a conventional high impedance differential pair, but the dominant pole compensation capacitor C_c is grounded, eliminating the Miller effect. This circuit still exhibits slew rate limiting, although at much higher frequencies. Typical examples of this configuration are the Analog Devices AD817 and Burr-Brown's OPA640.
The differential folded cascode Q_1–Q_4 and the current mirror Q_5,6 of Fig. 5.3.22 can be modeled by a transconductance, g_m, driven by the input voltage difference, Δv = v_s − v_fb. Here v_s is the signal source voltage and v_fb is the feedback voltage, derived from the output voltage v_o and the feedback network divider, R_e/(R_f + R_e). The current i = Δv g_m drives the output buffer and the capacitance C_c:

v_o = i/(sC_c) = (g_m/sC_c)(v_s − v_fb) = (g_m/sC_c)[v_s − v_o R_e/(R_f + R_e)]    (5.3.49)

Solving for the gain:

v_o/v_s = (g_m/sC_c) / [1 + (g_m/sC_c) · R_e/(R_f + R_e)]    (5.3.50)
which we rearrange into the standard single pole form:

v_o/v_s = (1 + R_f/R_e) · ω_1/(s + ω_1)    (5.3.51)

ω_1 = g_m R_e / [C_c (R_f + R_e)]    (5.3.52)

We see that the closed loop gain is, as usual, A_cl = 1 + R_f/R_e, whilst the amplifier closed loop pole is ω_1 = R_e g_m/[C_c (R_f + R_e)]; therefore the cutoff frequency is an inverse function of the closed loop gain, just as in ordinary voltage feedback amplifiers.
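In other words, Eq. 5.3.52 gives a constant gain–bandwidth product, g_m/(2πC_c). A minimal sketch with hypothetical device values (g_m and C_c are assumptions, not figures from the book):

```python
from math import pi

# Hypothetical device values for the folded-cascode model of Fig. 5.3.22.
g_m, C_c = 2e-3, 3.2e-12  # transconductance [S], compensation capacitor [F]

def f_h(A_cl):
    """Closed-loop cutoff from Eq. 5.3.52:
    w1 = gm*Re/(Cc*(Rf+Re)) = gm/(Cc*A_cl), since A_cl = (Rf+Re)/Re."""
    return g_m / (2 * pi * C_c * A_cl)

for A_cl in (1, 10, 100):
    print(f"A_cl = {A_cl:3d}: f_h = {f_h(A_cl)/1e6:6.2f} MHz, "
          f"GBW = {A_cl * f_h(A_cl)/1e6:.2f} MHz")
# The GBW column stays constant: bandwidth falls as 1/A_cl.
```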
A similar situation is encountered in Fig. 5.3.23, where the current i_b charging C_T is set by the input voltage difference and R_b: i_b = Δv/R_b.
Fig. 5.3.23: The basic current feedback amplifier (A_1, A_3, Q_1, Q_2) is improved by adding another buffer, A_2, which presents a high impedance to the feedback divider R_f and R_e; an additional resistor, R_b, now takes the role of converting the feedback voltage into a current and provides bandwidth setting. Like the original current feedback amplifier, this circuit is also (almost) free from slew rate limiting. However, the closed loop bandwidth is, as in voltage feedback amplifiers, gain dependent. A typical representative of this configuration is the Analog Devices OP467.
The output voltage is the integral of the current i_b = Δv/R_b on C_T:

v_o = Δv/(sC_T R_b) = [1/(sC_T R_b)] · [v_s − v_o R_e/(R_f + R_e)]    (5.3.53)

from which:

v_o/v_s = (1 + R_f/R_e) · ω_1/(s + ω_1),  ω_1 = R_e/[C_T R_b (R_f + R_e)]    (5.3.54)
The closed loop gain is the same as in the previous case, whilst the pole is ω_1 = R_e/[C_T R_b (R_f + R_e)], so the closed loop cutoff frequency is again an inverse function of the closed loop gain.
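The pole of Eq. 5.3.54 can be rewritten as ω_1 = 1/(C_T R_b A_cl), so here R_b plays the bandwidth-setting role that R_f plays in the true current feedback amplifier. A small sketch with hypothetical values (C_T and R_b are assumptions, not from the book):

```python
from math import pi

# Hypothetical values for the buffered CFA of Fig. 5.3.23.
C_T, R_b = 1.6e-12, 1e3  # transcapacitance [F], V-to-I conversion resistor [ohm]

def f_h(A_cl):
    """Closed-loop cutoff from Eq. 5.3.54: w1 = 1/(C_T*R_b*A_cl)."""
    return 1 / (2 * pi * C_T * R_b * A_cl)

for A_cl in (1, 10, 100):
    print(f"A_cl = {A_cl:3d}: f_h = {f_h(A_cl)/1e6:6.2f} MHz")
# f_h again falls as 1/A_cl, unlike the true CFA, where R_b is absent
# and the bandwidth is (almost) independent of gain.
```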
Fig. 5.3.24: An interesting combination of the circuits in Fig. 5.3.22 and 5.3.23 is the so-called Quad Core structure [Ref. 5.32]. Here both the inverting and the non-inverting input buffer currents are combined by the current mirrors to drive the C_cb of Q_9,10. The non-labeled transistors provide the V_be compensation for Q_1–Q_4. Typical representatives are the Analog Devices AD8047, AD9631, AD8041 and others.
The current available to charge the C_cb capacitances of Q_9,10 is set by the input voltage difference and R_5. This current is effectively doubled by the input structure, thus increasing the bandwidth, the loop gain, and the linearity. A further bandwidth improvement is achieved by the low impedance of Q_7,8, which are practically fully open and so provide tight control of the Q_9,10 base voltages, reducing the Miller effect considerably. The circuit behaves as a voltage feedback amplifier, with the advantages of low offset and high loop gain, and with bandwidth and slew rate limiting close to those of current feedback amplifiers.
The output buffer stage can also be improved for greater current handling efficiency. An example is shown in Fig. 5.3.25.
Here the collector currents of Q_2,3 and Q_1,4 are summed and mirrored by Q_5,7 and Q_6,8, respectively, and finally added to the output load current. With appropriate bias this scheme allows a reduction of the quiescent power supply current to just one third of that of a conventional buffer, whilst not compromising the full power bandwidth. At the same time, the circuit has a comparable loading capability and offers better linearity.
Fig. 5.3.26: Owing to the non-zero output impedance, a capacitive load adds another pole within the feedback loop. If the closed loop gain is too low, the resulting increase in phase can make the feedback positive at high frequencies, instead of negative, destabilizing the amplifier.
v_L = v_o β_R · ω_L/(s + ω_L)    (5.3.55)
Here β_R is the resistive divider formed by the output resistance R_o, the load resistance R_L and the total resistance of the feedback divider, R_f + R_e:

β_R = [R_L (R_f + R_e)/(R_L + R_f + R_e)] / {R_o + [R_L (R_f + R_e)/(R_L + R_f + R_e)]}    (5.3.56)
The pole ω_L is formed by the load capacitance C_L and the equivalent resistance seen by it, R_q, whilst f_L is the corresponding cutoff frequency:

ω_L = 1/(R_q C_L);  f_L = |ω_L|/2π    (5.3.57)

R_q is simply the parallel combination of all the resistances at the output node:

R_q = 1 / [1/R_o + 1/R_L + 1/(R_f + R_e)]    (5.3.58)
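The divider and load-pole formulas are easy to evaluate. The values below are hypothetical (chosen only to illustrate Eqs. 5.3.56–5.3.58, not taken from the book):

```python
from math import pi

# Hypothetical output-stage and load values.
R_o, R_L = 10.0, 100.0
R_f, R_e = 1e3, 50.0
C_L = 100e-12

R_fb   = R_f + R_e                     # total feedback-branch resistance
R_par  = R_L * R_fb / (R_L + R_fb)     # R_L || (R_f + R_e)
beta_R = R_par / (R_o + R_par)         # output divider, Eq. 5.3.56
R_q    = 1 / (1/R_o + 1/R_L + 1/R_fb)  # resistance seen by C_L, Eq. 5.3.58
f_L    = 1 / (2 * pi * R_q * C_L)      # load pole frequency, Eq. 5.3.57

print(f"beta_R = {beta_R:.3f}")        # close to 1, as the text notes below
print(f"R_q = {R_q:.2f} ohm, f_L = {f_L/1e6:.1f} MHz")
```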
The internal output voltage, v_o, is a function of the input voltage difference, Δv, and the amplifier open loop gain A, which, in turn, is also a function of frequency, A(s):

v_o = A(s) Δv    (5.3.59)

The input voltage difference is, of course, the difference between the signal source voltage and the output (load) voltage, attenuated by the feedback resistors:

Δv = v_s − v_L R_e/(R_f + R_e)    (5.3.60)

The open loop gain A(s) is defined by the DC open loop gain A_0 and the frequency dependent term owed to the amplifier dominant pole at the frequency ω_0:

A(s) = A_0 · ω_0/(s + ω_0)    (5.3.61)
Combining Eqs. 5.3.59–5.3.61:

v_o = A_0 · [ω_0/(s + ω_0)] · [v_s − v_L R_e/(R_f + R_e)]    (5.3.62)
and by inserting this back into Eq. 5.3.55 we have the load voltage:

v_L = A_0 · [ω_0/(s + ω_0)] · [v_s − v_L R_e/(R_f + R_e)] · β_R · [ω_L/(s + ω_L)]    (5.3.63)
Collecting the v_L terms on the left side:

v_L {1 + A_0 β_R [R_e/(R_f + R_e)] · ω_0 ω_L/[(s + ω_0)(s + ω_L)]} = v_s A_0 β_R · ω_0 ω_L/[(s + ω_0)(s + ω_L)]    (5.3.64)
v_L/v_s = {A_0 β_R · ω_0 ω_L/[(s + ω_0)(s + ω_L)]} / {1 + A_0 β_R [R_e/(R_f + R_e)] · ω_0 ω_L/[(s + ω_0)(s + ω_L)]}    (5.3.65)
v_L/v_s = [(R_f + R_e)/R_e] · {A_0 β_R [R_e/(R_f + R_e)] · ω_0 ω_L/[(s + ω_0)(s + ω_L)]} / {1 + A_0 β_R [R_e/(R_f + R_e)] · ω_0 ω_L/[(s + ω_0)(s + ω_L)]}    (5.3.66)
v_L/v_s = A_0 β_R · ω_0 ω_L / {s² + s(ω_0 + ω_L) + ω_0 ω_L [1 + A_0 β_R R_e/(R_f + R_e)]}    (5.3.67)
The product of the poles, s_1 s_2, is a function not just of ω_0 and ω_L, but also of the open loop gain A_0 and the closed loop feedback dividers, β_R and R_e/(R_f + R_e) (refer to Appendix 2.1 to find the system poles of a 2nd-order function). Since the output resistance, R_o, is usually much lower than both the load resistance R_L and the feedback resistance R_f + R_e, the output divider β_R is usually between 0.9 and 1. The system's stability is therefore dictated by the amount of loop gain when ω → ω_L. Thus close to ω_L the loop gain will be higher than 1 either if A_0 is very high, or if ω_L is relatively low and R_f → 0, that is, if the closed loop gain approaches unity!
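This can be made concrete by computing the damping ratio of the characteristic polynomial of Eq. 5.3.67 for several closed loop gains. The numbers below are illustrative assumptions in the spirit of Fig. 5.3.27 (dominant pole 100 Hz, load pole 2·10^4 times higher, A_0 = 10^5, β_R ≈ 1), not values from the book.

```python
from math import pi, sqrt

# Illustrative assumed values.
f0, A0, beta_R = 100.0, 1e5, 0.95
w0  = 2 * pi * f0          # dominant-pole frequency [rad/s]
w_L = 2 * pi * 2e4 * f0    # load-pole frequency, 2e4 times higher

def damping(A_cl):
    """Damping ratio of s^2 + s(w0 + w_L) + w0*w_L*(1 + A0*beta_R/A_cl),
    the denominator of Eq. 5.3.67 with Re/(Rf + Re) = 1/A_cl."""
    wn = sqrt(w0 * w_L * (1 + A0 * beta_R / A_cl))
    return (w0 + w_L) / (2 * wn)

for A_cl in (100, 5, 1):
    print(f"A_cl = {A_cl:3d}: zeta = {damping(A_cl):.2f}")
# Damping drops as the closed-loop gain is reduced: peaking grows,
# and the unity-gain case is the closest to oscillation.
```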
This is often counter-intuitive, not just to beginners, but sometimes even to experienced engineers. Usually, if we want to enhance the amplifier's stability, we increase the feedback at high frequencies by placing a capacitor in parallel with R_f. However, in the case of a capacitive load the amplifier will be turned into an oscillator by that procedure. In contrast, the stability will improve if we increase the closed loop gain (increase R_f or decrease R_e). This is illustrated in Fig. 5.3.27, where the gain and the phase are plotted for three values of R_f (∞, 4R_e and 0); the capacitive load is such that the loop gain at f_L ≈ 2·10^4 f_0 is about 3.
Conventional opamp compensation schemes (usually consisting of a resistor or a series RC network, connected between the two inputs, thus increasing the noise gain without affecting the signal gain) increase the stability at the expense of either the gain, or the bandwidth, or both! Conventional compensation should be used only as a last resort, when the circuit must drive an unknown load capacitance which can vary considerably.
In fixed applications, when the capacitive load is known, or is within a narrow range, it is much better to compensate the load by inductive peaking, as we have seen in Part 2. The simplest approach is shown in Fig. 5.3.28, where the load impedance appears real to the amplifier output, so that the feedback is not frequency dependent.
Fig. 5.3.27: An amplifier driving a capacitive load can become unstable if its closed loop gain is decreased too much: a) with no feedback (R_f = ∞) and b) with a gain of 5 (R_f = 4R_e), the amplifier is stable, although in the latter case there is already a pronounced peaking; whilst in c), with the closed loop gain reduced to 1 (R_f = 0), the peaking is very high, the phase goes over 360°, and oscillation is excited at the frequency at which the phase equals 360°.
Fig. 5.3.28: Capacitive load compensation which makes the load appear real and equal to R_L to the opamp. This works for fixed load impedances.
Here the inductance L_c and its parallel damping resistance, R_d, are chosen so that R_d = R_L and L_c = R_L² C_L, and the amplifier sees an impedance equal to R_L from DC up to the frequency at which the coil stray capacitance starts to influence the compensation. With a careful inductance design, the frequency at which this happens can be much higher than the critical amplifier frequency.
However, even with such compensation the bandwidth can be lower than desired, since the compensation network forms a low pass filter with the load, and the value of the inductance is influenced by both the load resistance R_L and the load capacitance C_L.
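The design recipe above reduces to two products. A minimal sketch with a hypothetical load (R_L = 100 Ω, C_L = 50 pF, not values from the book):

```python
from math import pi

# Hypothetical load values.
R_L, C_L = 100.0, 50e-12

L_c = R_L**2 * C_L              # compensating inductance, L_c = R_L^2 * C_L
R_d = R_L                       # parallel damping resistor, R_d = R_L
f_h = 1 / (2 * pi * R_L * C_L)  # resulting cutoff, = 1/(2*pi*sqrt(L_c*C_L))

print(f"L_c = {L_c*1e6:.2f} uH, R_d = {R_d:.0f} ohm, f_h = {f_h/1e6:.1f} MHz")
```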
The transfer function from the amplifier output to the load is:

v_L/v_o = [s/(R_L C_L) + 1/(L_c C_L)] / [s² + 2s/(R_L C_L) + 1/(L_c C_L)]    (5.3.68)
The cutoff frequency is ω_h = 1/√(L_c C_L) = 1/(R_L C_L), and that is much lower than ω_L of Eq. 5.3.57, which would apply if the amplifier could be made stable by some other means. If R_L and C_L can be separated, it is possible to build a 2-pole series peaking or a T-coil peaking circuit, tuned to form a 3-pole system together with the pole associated with the amplifier closed loop cutoff. This procedure is similar to the one described in Part 2 and also in Sec. 5.1, so we leave it as an exercise for the reader.
Another compensation method, shown in Fig. 5.3.29, is to separate the AC and DC feedback paths by a small resistance R_c in series with the load and a feedback bridging capacitance C_c:
Fig. 5.3.29: The capacitive load is separated from the AC feedback path by a small
resistor Rc in series with the output; the components are chosen so that Cc·Rf = Rc·CL.
Owing to the capacitance Cc this type of compensation can be applied only to voltage
feedback unity gain compensated amplifiers.
This type of compensation can be very effective, since very small values of Rc
can be used (5–15 Ω or so), lowering the bandwidth only slightly; however, due to the
feedback bridging capacitance Cc, it can be applied only to voltage feedback unity
gain compensated amplifiers; it cannot be used for current feedback amplifiers.
A more serious problem is the fact that, in some applications, the load
capacitance would vary considerably; for example, some types of fast AD converters
have their input capacitance code dependent (thus also signal level dependent!). It is
therefore desirable to design the amplifier output stage with the lowest possible output
resistance and employ a compensation scheme which would work for a range of
capacitance values.
Fig. 5.3.30 shows the implementation found in some CFB opamps, where the
compensation network, formed by Cc and Rc, is in parallel with the output buffer.
With a high impedance load the voltage drop on the output resistance Ro is small and
the current through the compensation network is low; but with a capacitive load or
other low impedance load the output current causes a high voltage drop on Ro, and
consequently the current through Cc increases. Effectively the series combination of
Cc and CL is added in parallel to CT, thus lowering the system open loop bandwidth
in proportion with the load and keeping the loop stable.
Fig. 5.3.30: The output buffer with a finite output resistance Ro would, when driving a capacitive
load CL, present an additional pole within the feedback loop (taken from vo), which could
compromise the amplifier stability. The compensation network, formed by a series connection of Cc
and Rc, draws part of the feedback current to the output node, effectively increasing CT in
proportion to the load, reducing the transimpedance and preserving the closed loop stability.
The voltage drop across the buffer output resistance is:

vT − vo = io·Ro        (5.3.69)

so the current through the compensation network is:

ic = (vT − vo)/Zc = io·Ro/Zc        (5.3.70)

The transimpedance ZT is driven by the feedback current ifb; the voltage vT, which in
an ideal case would be equal to ifb·ZT, is now lower, because part of the current is
stolen by the compensation network Zc:

vT = (ifb − ic)·ZT        (5.3.71)

vT = ifb·ZT − io·Ro·(ZT/Zc)        (5.3.72)
This equation shows that the original transimpedance equation (Eq. 5.3.18) is
modified by the output current and the ZT/Zc ratio. Thus a capacitive load, which
would draw high currents at high frequencies (or at the step transition), will
automatically lower the system open loop cut off frequency. Consequently the gain at
high frequencies is reduced so that the closed loop crossover remains well below the
secondary pole (created by Ro and CL).
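A minimal numeric sketch of Eqs. 5.3.69–5.3.72 (all values below are hypothetical, chosen for illustration only) shows how the load current reduces the effective transimpedance voltage:

```python
# Hypothetical values, for illustration only
Ro = 10.0      # buffer output resistance [ohm]
Zc = 200.0     # compensation network impedance at the frequency of interest [ohm]
ZT = 1e6       # open loop transimpedance [ohm]
ifb = 1e-6     # feedback error current [A]

def v_T(io):
    ic = io * Ro / Zc          # Eq. 5.3.70: current drawn by the network
    return (ifb - ic) * ZT     # Eq. 5.3.71: reduced transimpedance voltage

print(v_T(0.0))       # ~1.0 V : light load, full transimpedance
print(v_T(10e-6))     # ~0.5 V : heavy capacitive load halves vT
```

The higher the output current, the more the open loop gain is cut back, which is exactly the self adjusting behavior described above.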
Note that the distortion at high frequencies of the compensated amplifier will
be worse than that of a non-compensated one.
Fig. 5.3.31: The output level clipping is more precise if a biased Zener diode in a
Schottky diode bridge is controlling the feedback. However, this circuit can be
used only with voltage feedback unity gain compensated amplifiers.
Fig. 5.3.32: The output buffer with a separate lower supply voltage can be used for
output signal clipping with current feedback amplifiers. Since feedback is lost during
clipping, the input stage must be protected from overdrive by anti-parallel Schottky
diodes, which, inevitably, increase the input capacitance.
Fig. 5.3.33: Output signal clipping by limiting the internal CT node of a current feedback amplifier.
The transistors Q5–Q8 are normally reverse biased. For positive voltages Q6,8 (Q5,7 for negative) start
conducting only when the voltage at CT reaches VcH (VcL).
Transistors Q1–Q4 form the two current mirrors, which reflect the feedback
current ifb from the inverting input (the first buffer output) into the transimpedance
node (at CT). Transistors Q5 and Q7 are normally reverse biased and so are the BE
junctions of Q6 and Q8. The transistors Q5,7 start to conduct when the CT voltage
(and consequently the output voltage vo) falls below VcL. Likewise, the transistors
Q6,8 conduct when the CT voltage exceeds VcH. When either of these transistors
conducts, it diverts the mirrored current ifb to one of the supplies. Note that the
voltages which set the clipping levels can be as high as the supply voltage. Also, they
can be adjusted independently, as long as VcH > VcL. Since only two transistors at a
time are required to switch on or off, the limiting action, as well as the recovery from
limiting, can be very fast.
The most common misconception about overdrive recovery, even amongst
experienced engineers, is the belief that short switching times can be achieved only if
the transistors are prevented from saturating by artificially keeping them within the
linear signal range. It often comes as a surprise when this does not solve the problem or,
in some cases, makes it even worse.
It is true that adding Schottky diodes to a TTL gate makes it faster than
ordinary TTL, and the inherently non-saturating ECL is even faster. Fast recovery is
ultimately limited by the so called minority carrier storage time within the
semiconductor material, and it depends on the type and concentration of dopants
which determine the mobility of minority charge carriers. However, in analog circuits
the problem is radically different from that in digital circuits, since we are interested not just
in how quickly the output will start to catch up with the input, but rather in how quickly it
will follow the input to within 0.1 %, or even 0.01 %. In current state-of-the-art
circuitry, the best recovery times are < 5 ns for 0.1 % error and < 25 ns for 0.01 % error.
In this respect it is the thermal tails that are causing trouble. Wideband
circuits need more power than conventional circuits, so good thermal balance is
critical. Careful circuit design is required in order to keep temperature differences
low, both during linear and non-linear modes of operation.
To some extent, we have been dealing with thermal problems in Part 3. There
we were discussing two-transistor circuits, such as the cascode and the differential
amplifier. But the problem in multi-transistor circuits is that, even if the circuit is differentially
or complementary symmetrical, only one or two transistors will be saturated during
overdrive; the rest of the circuit will either remain linear or be cut off, which in the latter
case means cooling down. In saturation there is a low voltage across the device
(usually a few tens of mV), so, even if the current through it is large, the power
dissipation is low. Inevitably this results in different thermal histories of different
parts of the circuit.
In integrated circuits the temperature gradients across the die are considerably
lower than those between transistors in discrete circuits, but complex circuits can be
large and heat conduction can be limited, so designers tend to reduce the power of
auxiliary circuitry which is not essential for high speed signal handling. Therefore, hot
spots can still exist and can cause trouble. Another important factor is gain switching
and DC level adjustment, which must not affect the thermal balance, either because of
amplifier working point changes or because of the control circuitry.
Circuits which rely on feedback for error correction can be inherently less
sensitive to thermal effects (except for the input differential transistor pair!).
However, the feedback stability, or, more precisely, the no feedback stability during
overdrive, can cause identical or even worse problems. If there is insufficient damping
during the transition from saturation back to the linear range, long ringing can result,
impairing the recovery time. Such problems are usually solved by adding normally
reverse biased diodes, which conduct during the saturation of a nearby transistor,
allowing the remaining part of the circuit some control over critical nodes.
We will review a few possible solutions in the following section.
forgotten, then reinvented from time to time, only to be rediscovered by the broader
engineering community in 1975, when the Quad 405 audio power amplifier came onto
the market [Ref. 5.42 and 5.43]. Following the presentation article by the 405's
designer, Peter J. Walker, the idea was refined and generalized by a number of
authors, among the first by J. Vanderkooy and S.P. Lipshitz [Ref. 5.44].
Later M.J. Hawksford [Ref. 5.46] showed that between the two extremes (pure
feedback on one end and pure feedforward on the other) there is a whole spectrum of
solutions combining both concepts. Moreover, such solutions can be applied both at
system level (like the Quad 405 itself or similar solutions, as in [Ref. 5.48]) or down
to the transistor level (as in [Ref. 5.47]).
It is interesting that while there were several examples of feedforward
application in the field of RF communications (some even before 1975), most of the
theoretical work was done with audio power amplification in mind. Apparently it took
some time before the designers of high speed circuits grasped the full potential and the
inherent advantages of the feedforward error correction techniques. In a way, this
situation has not changed much, for even today we meet feedforward error correction
mostly in RF communications systems and top level instrumentation. At the IC level
we find feedforward only as a method of phase compensation (bypassing a slow
interstage, not error correction), mostly in older opamps. Since 1985 the advance in
semiconductor processing has been extremely fast, discouraging amplifier designers
from seeking more exotic circuit topologies.
Before we examine the benefits of the feedforward technique for wideband
amplification we shall first compare the feedback and feedforward principles from the
point of view of error correction.
5.4.1 Feedback and Feedforward Error Correction
Fig. 5.4.1 shows the comparison of amplifiers with feedback and feedforward
error correction. The feedforward amplifier is shown in its simplest form; later we
shall see other possible realizations of the same principle.
Fig. 5.4.1: Comparison of amplifiers with feedback and feedforward error correction. The
feedback amplifier has excess gain, A(s), which is reduced to the required level by taking the
output voltage, suitably attenuated (β), to the inverting input of the differential amplifier (hence the
name negative feedback). The feedforward case, in its simplest form, has two amplifiers: the
main amplifier A1(s) is assisted by the auxiliary amplifier A2(s), which takes the difference
between the input voltage and the attenuated main amplifier output forward to the load.
The analysis of the feedback amplifier has already been presented in Sec. 5.3,
but we shall repeat some expressions in order to correlate them with the feedforward
amplifier. From Fig. 5.4.1a we can write:
vo = (vs − β·vo)·A(s)        (5.4.1)

vo/vs = A(s) / (1 + β·A(s))        (5.4.2)

The fraction on the right hand side is the amplifier closed loop gain Gfb; it can be
rewritten in such a form, from which it is easily seen that the gain expression can be
approximated as 1/β if β·A(s) is large:

Gfb = A(s)/(1 + β·A(s)) = (1/β) · 1/[1 + 1/(β·A(s))] → 1/β  when A(s) → ∞        (5.4.3)

where 1/β = (Rf + Re)/Re.
fh = f0·(1 + β·A0)        (5.4.5)

and, considering that β·A0 ≫ 1, the closed loop cut off frequency is:

fh ≈ f0·β·A0 = f0·A0/Gfb        (5.4.6)

So the closed loop cut off frequency of a voltage feedback amplifier is inversely
proportional to the closed loop gain.
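Eq. 5.4.6 is the familiar gain–bandwidth trade-off; a small sketch with hypothetical opamp numbers (A0 = 10⁵ and f0 = 100 Hz are illustrative, not from the text):

```python
A0 = 1e5       # open loop DC gain (hypothetical)
f0 = 100.0     # dominant pole [Hz] (hypothetical)
GBW = A0 * f0  # gain-bandwidth product: 10 MHz

for Gfb in (1, 10, 100):
    fh = f0 * A0 / Gfb     # Eq. 5.4.6
    print(Gfb, fh)         # Gfb = 1 -> 10 MHz, 10 -> 1 MHz, 100 -> 100 kHz
```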
[Fig. 5.4.2 plot: A(f) = A0·f0/(jf + f0); Gfb(f) = ((Rf + Re)/Re)·fh/(jf + fh); fh = f0·(βA0 + 1); β = Re/(Rf + Re)]
Fig. 5.4.2: For a voltage feedback amplifier the closed loop frequency response Gfb(f)
depends on the open loop gain A0, its dominant pole f0 and the feedback attenuation factor β.
The transition frequency fc is equal to the amplifier gain bandwidth product (but only if the
amplifier does not have a secondary pole close to or lower than fc).
For the feedforward amplifier, Fig. 5.4.1b, we must first realize that the load
voltage, vo, is the difference of the output voltages of the individual amplifiers:

vo = v1 − v2        (5.4.7)

v1 = A1·vs        (5.4.8)

The auxiliary amplifier amplifies the difference between the attenuated main
amplifier output and the input signal:

v2 = A2·(β·v1 − vs)        (5.4.9)

vo = A1·vs − A2·(β·A1·vs − vs)        (5.4.10)

Gff = vo/vs = A1 + A2·(1 − β·A1) = A1 + A2 − β·A1·A2        (5.4.11)

If we now choose the main amplifier gain so that:

β·A1 = 1        (5.4.12)

then:

Gff = A1 = 1/β        (5.4.13)
So, whatever the actual value of the auxiliary amplifier gain A2, the system's
gain Gff will be equal to 1/β if we can make A1 = 1/β. Note that we have not
required either of the two gains to be very high, as we were forced to for feedback
amplifiers; therefore this result is achieved without any approximation! True, if A1 is
frequency dependent and β is not, at high frequencies Eq. 5.4.12 would not hold and,
consequently, Eq. 5.4.13 would not be so simple.
However, when A1 starts to fall with frequency, the product β·A1·A2 is reduced
by the same amount, and the corresponding part of A2 compensates the loss.
This holds as long as the gain A2 remains constant with frequency (and even
beyond its own cut off, provided that there is still enough gain for correction!).
Thus feedforward (in principle) achieves the dream goal:
a zero error, high cut off frequency gain, using non-ideal amplifiers!
We shall, of course, still have to deal with component tolerances, temperature
dependences, uncontrollable strays, time delays, etc., but with a manageable effort all
these factors can be minimized, or, at least, kept below some predictable limit.
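The "no approximation" claim can be checked directly from Eq. 5.4.11; in this sketch β and the gain values are arbitrary illustrative numbers:

```python
beta = 0.1
A1 = 1.0 / beta                 # input balance condition: beta*A1 = 1

# Eq. 5.4.11: Gff = A1 + A2*(1 - beta*A1) is exactly 1/beta for ANY A2
for A2 in (0.5, 3.0, 100.0):
    Gff = A1 + A2 * (1 - beta * A1)
    print(Gff)                  # 10.0 every time

# Feedback, by contrast, only approaches 1/beta for large A (Eq. 5.4.3)
A = 100.0
print(A / (1 + beta * A))       # ~9.09, i.e. about 9 % gain error
```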
If you now think that there are no more surprises with feedforward amplifiers,
consider the following points:
Eq. 5.4.11 is, in a sense, symmetrical, thus Gff = 1/β (as in Eq. 5.4.13) can be
achieved also if we decide to make:

β·A2 = 1        (5.4.14)
However, the advantage of making β·A1 = 1 is that the input signal is
canceled at the auxiliary amplifier differential input (remaining as a common mode
signal only). In contrast with the output balance condition represented by Eq. 5.4.14,
Eq. 5.4.12 represents the so called input balance condition, in which the auxiliary
amplifier needs only a very low level amplitude swing at low frequencies (rising
to 1/2 of the full amplitude and higher at the A1 cut off and beyond, respectively). It therefore
processes only the errors of the main amplifier, canceling them at the load and leaving
only those of the auxiliary amplifier; as errors in processing the main amplifier error,
those are secondary errors only.
It is also possible to make both gains equal to 1/β:

A1 = A2 = 1/β        (5.4.15)

To see the frequency dependence, let each amplifier be modeled by a single real pole:

A1(s) = A01·ω1/(s + ω1),   A2(s) = A02·ω2/(s + ω2)        (5.4.16)

so that Eq. 5.4.11 becomes:

Gff = [A01·ω1·(s + ω2) + A02·ω2·(s + ω1) − β·A01·A02·ω1·ω2] / [(s + ω1)(s + ω2)]        (5.4.17)
where A01 and A02 are the main and auxiliary amplifier DC gains, respectively. By
choosing A01 = 1/β we get:

Gff = (1/β) · { ω1/(s + ω1) + β·A02 · [ω2/(s + ω2)] · [s/(s + ω1)] }        (5.4.18)
Note that the auxiliary amplifier gain is effectively multiplied by the high pass
version of the main amplifier's frequency dependence. If we now decide to make
A02 = 1/β also, we obtain:

Gff = (1/β) · [ ω1/(s + ω1) + ω2·s / ((s + ω1)(s + ω2)) ]        (5.4.19)

and, with both cut off frequencies equal, ω2 = ω1:

Gff = (1/β) · ω1·(ω1 + 2s) / (s + ω1)²        (5.4.20)
The second fraction represents a second order band pass response, which adds
some gain peaking and extends the bandwidth by a factor of almost 3:
Fig. 5.4.3: The feedforward amplifier bandwidth Gff is highest (and optimized in
the sense of the lowest gain bandwidth requirement of the auxiliary amplifier) if both
amplifiers have the same bandwidth and gains equal to 1/β. In this figure
1/β = 10, A01 = 10, and A02 = 11, in order to distinguish A2 from A1 more
easily. If ω1 ≠ ω2 the Gff bandwidth will be lower.
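The bandwidth extension implied by Eq. 5.4.20 can be located numerically. Writing s = jω and u = ω/ω1, the normalized magnitude is |β·Gff| = √(1 + 4u²)/(1 + u²); a bisection sketch:

```python
import math

def mag(u):
    # normalized |beta*Gff| from Eq. 5.4.20, with s = j*omega and u = omega/omega1
    return math.sqrt(1 + 4 * u**2) / (1 + u**2)

# mag() peaks near u = 1/sqrt(2) (about 1.25 dB) and then falls monotonically,
# so we can bisect on [1, 10] for the -3 dB point
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mag(mid) > 1 / math.sqrt(2):
        lo = mid
    else:
        hi = mid

print(round(lo, 3))   # 2.482: the cut off moves out to nearly 2.5*omega1
```

The exact −3 dB point is at u² = 3 + √10, i.e. u ≈ 2.48; together with the gain peaking this is the "factor of almost 3" quoted above.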
The relative sensitivity of a system function T to the variation of a parameter x is
defined as:

S_x^T = (x/T) · (∂T/∂x)        (5.4.21)
For the feedback amplifier we want to know the influence of variations of the
amplifier open loop gain A on the closed loop gain Gfb, as defined by Eq. 5.4.3:

S_A^Gfb = (A/Gfb) · (∂Gfb/∂A) = [A·(1 + β·A)/A] · 1/(1 + β·A)² = 1/(1 + β·A) → 0  when A → ∞        (5.4.22)
This means that the gain sensitivity is low only if A is very high. We also want to
know the influence of variations of the feedback attenuation β:

S_β^Gfb = (β/Gfb) · (∂Gfb/∂β) = −β·A/(1 + β·A) → −1  when A → ∞        (5.4.23)
In the case of the feedforward amplifier, the influence of A1 and A2 on the
system gain, as well as the influence of β, using Eq. 5.4.11 for Gff, is:

S_A1^Gff = A1·(1 − β·A2) / [A1·(1 − β·A2) + A2] = 0  if β·A2 = 1        (5.4.24)

S_A2^Gff = A2·(1 − β·A1) / [A2·(1 − β·A1) + A1] = 0  if β·A1 = 1;  = 1 − β·A1  if β·A2 = 1        (5.4.25)

S_β^Gff = −β·A1·A2 / (A1 + A2 − β·A1·A2) = −1  if β·A1 = β·A2 = 1        (5.4.26)
It is evident that the second condition in Eq. 5.4.24 and the first in Eq. 5.4.25,
as well as the third condition in Eq. 5.4.26, are the same as for the feedback amplifier.
However, remember that for the feedback amplifier the results belong to the ideal case
for which A → ∞, so in practice they can only be approximated, whilst for the
feedforward amplifier they can be realized without any approximation (but within the
limits of the specified component tolerances).
In a similar way we can determine the error reduction. For the feedback
amplifier we have:

εfb = εA / (1 + β·A) → 0  only when A → ∞        (5.4.27)
and again, zero distortion is achievable only in the idealized case of infinite gain.
In contrast, for the feedforward amplifier we have:

εff = εA1·(1 − β·A2) = 0  if β·A2 = 1        (5.4.28)
and this extraordinary result can be realized (not only approximated!) to whatever
degree of precision we are satisfied with (accounting also for the technology cost).
Also, we must not forget that the open loop gain of the feedback amplifier
decreases with frequency, so, for a given A0, the theoretically achievable maximum
error reduction, 1/(1 + β·A0), is obtained only from DC up to the frequency of the
dominant pole, f0; beyond f0 the error increases proportionally with frequency.
In contrast, feedforward amplifiers offer the same level of error reduction from
DC up to the full feedforward system bandwidth and even beyond!
5.4.3 Alternative Feedforward Configurations
The main drawback of the feedforward amplifier in Fig. 5.4.1b is the floating
load (between the outputs of both amplifiers); in most cases we would prefer a ground
referenced load, instead.
We have already noted that the output impedance of the main amplifier is not
reduced by feedforward action; in fact, it does not need to be low in order to achieve
effective error canceling. This leads to the idea of summing passively the two outputs,
with that of the auxiliary amplifier inverted in phase:
Fig. 5.4.4: Grounded load feedforward amplifier. Note the inverted input polarity of the
auxiliary amplifier, compared with the circuit in Fig. 5.4.1b. This allows passive signal
summing over the output impedances Z3 and Z4.
@" @ o
^$
3#
and
@# @ o
^%
(5.4.29)
so that:
@o ^L a3" 3# b
(5.4.30)
^L
^$
^%
@
@#
^$ ^% ^$ ^% "
^$ ^ %
^L
^$ ^%
(5.4.31)
^L
^$ ^ %
^L
^$ ^%
^$
^$ ^%
(5.4.32)
@o
^%
+
aE # " E " E # b
+ E "
"
@s
^$
(5.4.33)
Because of the passive summing, the correct balance condition and error
canceling for this circuit is achieved when the two output voltages are in the inverse
ratio of the impedances:

v2/v1 = Z4/Z3        (5.4.34)
Since the auxiliary amplifier output carries only the (inverted and β-attenuated,
then amplified) main amplifier error, the error components are in the ratio:

v2/v1 = β·A2        (5.4.35)

which, together with Eq. 5.4.34, gives the output balance condition:

β·A2 = Z4/Z3        (5.4.36)

On the other hand, the input balance condition, β·A1 = 1, because of
Eq. 5.4.34 and 5.4.36, results in the gain ratio equal to the impedance ratio:

A2/A1 = Z4/Z3        (5.4.37)
The auxiliary amplifier will, under this condition, draw considerable current,
even without the load. To reduce the current demand, we have to give up the input
balance condition. If we set v2 = v1 then there would be no current if ZL = ∞, and
by choosing Z3 ≪ Z4 = ZL the current demand is reduced for the nominal load:

v2 = v1        (5.4.38)

A2·(1 − β·A1) = A1        (5.4.39)

β·A1 = 1 − A1/A2        (5.4.40)

and this should be compared with the simple balance condition in Eq. 5.4.37.
Another configuration, known as the error take off, by Sandman [Ref. 5.49],
is shown in Fig. 5.4.5. Here both the main and the auxiliary amplifier are of the
negative feedback type; however, the auxiliary amplifier senses both the distortion
and the gain error from the main amplifier feedback input and delivers it to the load in
the same feedforward passive summing manner.
Fig. 5.4.5: Error take off principle. The error Δv of the main amplifier, left by feedback,
is taken by the auxiliary amplifier and fed forward to the load, where it is passively
summed with the main output. The impedances Z1 to Z4 form the balancing bridge.
With an ideal main amplifier the voltage at its inverting input would be at a
(virtual) ground potential; any signal Δv present at this point represents an attenuated
version of the main amplifier error (gain error and distortion). If adequately amplified,
it can be added to the main output to cancel the error at the load:

Δv = ε·Zi/(Zi + Z1)        (5.4.41)

so the auxiliary amplifier gain, set by Re together with the bridge impedances Zi, Z1
and Z4, must restore this attenuation for the error to cancel at the load (Eq. 5.4.42).
A variation of this circuit is shown in Fig. 5.4.6, which actually represents the
original Quad 405 current dumping amplifier configuration, and we now see how it
follows from both the error take off circuit and the pure feedforward circuit. If the
balance condition is achieved, A2 must compensate whatever the amount of error at
the A1 output. It is important to realize that the input signal of A1 can be taken from
any suitable point within the circuit (the only condition is that it should, preferably,
not be out of phase); the A2 output represents just such a convenient point. The
balance condition for the 405 is:

Z3/Z2 = Z4/Z1        (5.4.43)
and the main amplifier error is canceled. Again, the impedances Zn can be real or
complex, whichever combination satisfies this equation.
Fig. 5.4.6: Current dumping: by taking the main amplifier input signal from the
auxiliary amplifier output (v2), we obtain the original Quad 405 amplifier configuration.
Although it is effective in error cancellation, its main disadvantages are the requirement
for a large voltage at the auxiliary amplifier output and a relatively low cut off frequency. In
the Quad 405, Z1 and Z3 are resistors, Z2 is a capacitor and Z4 is an inductor.
compensate it by altering the balance condition. However, for a square wave or a step
function, large spikes, equal to the full amplitude difference, would result, and these
cannot be corrected by the balance setting components. Moreover, these spikes can
overdrive the input of the auxiliary amplifier and saturate its output; error correction
in such conditions is inoperative. Thus for high speed amplification some form of
time delay compensation is mandatory.
In Fig. 5.4.7 we see the general principle of time delay compensation. Since each
amplifier has its own time delay acting on a different summing node, at least two
separate time delay circuits are needed. Here, τ1 compensates the delay of the main
amplifier, allowing the auxiliary amplifier to form the difference between the
input and the β-attenuated signal with the correct phase. Likewise, τ2 does so for the
auxiliary amplifier delay, for correct summing at the output.
Fig. 5.4.7: Time delay compensation principle for the basic feedforward amplifier: τ1
compensates the delay introduced by the main amplifier and τ2 compensates the delay
introduced by the auxiliary amplifier.
Ie = Is·(e^(qe·Vbe/kB·T) − 1)        (5.4.44)
or, inverted (for Ie ≫ Is):

Vbe = (kB·T/qe)·ln(Ie/Is)        (5.4.45)
For small signals, not altering the junction temperature considerably, we can assume
VT = kB·T/qe to be constant. So if we also neglect the dependence of the current gain
and Is on temperature and biasing, as well as the internal resistance and capacitance
variations with the signal, we can express the non-linearity in the form of the internal
emitter resistance:

re = ∂Vbe/∂Ie        (5.4.46)
For the differential pair of Fig. 5.4.8 the effective resistance seen by their
base-emitter junctions is the sum:

red = ∂Vbe1/∂Ie1 + ∂Vbe2/∂Ie2        (5.4.47)
which, since one increases and the other decreases with the signal, varies much less
over the much larger input signal range than in the single transistor case.
Fig. 5.4.8: a) A simple differential amplifier, showing the voltages and currents
used in the analysis. b) The same, but with the emitter degeneration resistors Re.
For the amplifier in Fig. 5.4.8a we must first realize that the differential input
voltage is equal to the difference of the two Vbe junction voltages:

Vdi = Vb1 − Vb2 = Vbe1 − Vbe2        (5.4.48)

where, by Eq. 5.4.45, each junction voltage is:

Vbe = VT·ln(Ie/Is)        (5.4.49)
so that the emitter current ratio is:

Ie1/Ie2 = e^(Vdi/VT)        (5.4.50)

The collector current is Ic = αF·Ie; also, the sum of the emitter currents must be equal to the
bias provided by the constant current source, Ie0. Thus:

Ic1 + Ic2 = αF·(Ie1 + Ie2) = αF·Ie0        (5.4.51)

Ic1 = αF·Ie0 / (1 + e^(−Vdi/VT))   and   Ic2 = αF·Ie0 / (1 + e^(Vdi/VT))        (5.4.52)
The collector voltage is equal to the supply voltage less the potential drop Rc·Ic:

Vo1 = Vcc − Rc·Ic1   and   Vo2 = Vcc − Rc·Ic2        (5.4.53)

Therefore the differential output voltage will follow a hyperbolic tangent function of
the input differential voltage:

Vdo = Vo1 − Vo2 = Rc·(Ic2 − Ic1)
    = αF·Rc·Ie0·[ 1/(1 + e^(Vdi/VT)) − 1/(1 + e^(−Vdi/VT)) ]
    = −αF·Rc·Ie0·tanh(Vdi/2VT)        (5.4.54)
Fig. 5.4.9: a) The DC transfer function of the differential amplifier in Fig. 5.4.8a: the
input differential voltage is normalized to VT and the output is normalized to αF·Rc·Ie0.
b) With the emitter degeneration, as in Fig. 5.4.8b, the transfer function is more linear,
but at the expense of the gain (lower slope).
The system gain is represented by the slope of the plot, which, for Vdi = 0, is:

A = ∂Vdo/∂Vdi = αF·Rc·Ie0/(2VT) = αF·Rc/re        (5.4.55)

where re = 2VT/Ie0 is the dynamic emitter resistance of each transistor biased at Ie0/2.
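The tanh characteristic of Eq. 5.4.54 and the slope of Eq. 5.4.55 can be reproduced numerically; αF and Rc below are hypothetical bias values (the 10 mA bias matches the test setup used later in this section, the rest are illustrative):

```python
import math

alphaF = 0.99     # forward current gain (hypothetical)
Rc = 100.0        # collector load [ohm] (hypothetical)
Ie0 = 10e-3       # bias current [A]
VT = 0.026        # kT/q at room temperature [V]

def Vdo(Vdi):
    # Eq. 5.4.54 (magnitude; the sign depends on the output polarity convention)
    return alphaF * Rc * Ie0 * math.tanh(Vdi / (2 * VT))

# Eq. 5.4.55: small signal gain at Vdi = 0, with re = 2*VT/Ie0
gain = alphaF * Rc * Ie0 / (2 * VT)
num = (Vdo(1e-6) - Vdo(-1e-6)) / 2e-6    # numerical slope at the origin
print(round(gain, 3), round(num, 3))     # both 19.038
```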
For a single transistor biased by Ie0 and driven by a signal current i, the junction
voltage is:

Vbe = VT·ln((Ie0 + i)/Is)        (5.4.56)

∂Vbe/∂i = VT/(Ie0 + i)        (5.4.57)

which can be split into a linear and a non-linear part:

∂Vbe/∂i = VT/Ie0 − (VT/Ie0)·(i/Ie0)/(1 + i/Ie0)        (5.4.58)

where the first term is the linear and the second the non-linear component.
We define the loading factor x as the ratio of the signal current to the bias current:

x = i/Ie0        (5.4.59)
So we can express the incremental non-linearity (INL) factor N as the ratio of the
non-linear gain component to the linear one:

N(x) = x·VT / [(1 + x)·(VT + Ie0·Re)]        (5.4.60)

Generally N can be (and usually is) a function of many variables, not just one.
In a similar way we can derive the INL for the differential pair, where the
differential input voltage is:

Vdi = VT·ln[(Ie0 + i)/(Ie0 − i)] + 2·i·Re        (5.4.61)

∂Vdi/∂i = 2VT·Ie0/(Ie0² − i²) + 2·Re        (5.4.62)

N(x) = 2VT·x² / [(1 − x²)·(2VT + 2·Ie0·Re)]        (5.4.63)
This expression can be used to estimate the amount of error for a given signal
and bias current, which an error correction scheme attempts to suppress.
In the following pages we are going to show a collection of differential
amplifier circuits, employing some form of error correction, either feedback,
feedforward or both. We are also going to present their frequency and time domain
performance to compare how the bandwidth has been affected as a result of increased
circuit complexity (against a simple cascode amplifier).
For a fair comparison all circuits have been arranged to suit the test setup
shown in Fig. 5.4.10; the amplifiers were set for a gain of 2, using the same type of
transistors (BF959) and biased by a 10 mA current source. The input signal was
modeled by a 10 mA step driven current source, loaded by two 50 Ω ∥ 1 pF networks.
An equal network was used as the output load. Finally, all circuits were tuned for a
Bessel system response (MFED), using only capacitive emitter peaking (of course,
inductive peaking can be used in the final design). Note that this setup offers only a
relative indication of what can be achieved, not a final optimized design.
Fig. 5.4.11: This simple differential cascode amplifier, employing no error correction, is
used in the test set up circuit of Fig. 5.4.10, representing the reference against which all
other amplifiers are compared. The emitter peaking and base impedance are adjusted for a
MFED response.
Fig. 5.4.12 and 5.4.13 show the frequency domain and time
domain responses, respectively.
[Fig. 5.4.12: frequency domain response of the reference cascode amplifier; gain (dB), phase and envelope delay vs. f (GHz).]

[Fig. 5.4.13: time domain (step) response of the reference cascode amplifier; output voltage (mV) vs. t (ns).]
The first circuit to be compared with the reference is shown in Fig. 5.4.14. The
circuit is owed to C.R. Battjes [Ref. 5.18] and is functionally a Darlington connection
(Q1, Q2), improved by the addition of Q3. Used as the differential input stage of a
cascode amplifier, it enhances the input characteristics and increases both the output
current handling and the bandwidth. At first glance it may seem that the diode
connected Q3 (shorted collector and base) cannot do much. However, it allows Q1 to
carry a current much larger than the Q2 base current, delivering it to the resistance Re
and lowering the impedance seen by the base of Q2, thus extending the bandwidth. The
compound device has about twice the current gain of a single transistor.
Fig. 5.4.14: a) Improved Darlington. b) Used as the input differential stage of the
cascode amplifier; see the performance in the following figures.
Fig. 5.4.15: Frequency domain performance of the differential cascode amplifier using
the circuit of Fig. 5.4.14b. The bandwidth is about 560 MHz. Note the input voltage
changing slope above 2 GHz.
Fig. 5.4.16: Time domain performance of the differential cascode amplifier of Fig. 5.4.14b.
The undershoot has increased, but the rise time is less than 0.7 ns.
In Fig. 5.4.17, Q1 and Q2 form the differential amplifier, whose error current i1
is sensed at the resistor R1 and is available at the collector of Q3 for summing with
the output current i2 (error feedforward) further on in the circuit.
Fig. 5.4.17: A simple differential amplifier with feedforward error correction. Accurate
matching of transistors is required only for DC error reduction, not for the feedforward
linearization. Here i2 is the differential current, whilst the error current, i1, sensed at R1,
is available at the Q3 collector to be added to the output current further on in the circuit.
Two such circuits can form a differential amplifier, employing a double error
feedforward correction, as shown in Fig. 5.4.18. The error currents can now be
summed directly with output currents, without further processing.
However, the main problem with this linearization technique is that R1 must
be relatively high for suitable error sensing, so it can reduce the bandwidth
considerably. In part the bandwidth can be improved by adding precisely matched
capacitors in parallel to both R2 and R3 (emitter peaking), but then the input
impedance can become negative and should be compensated accordingly. This
negative input impedance compensation is easily implemented at the vi inputs, but, by
adding it to the inputs connected to R1, the error sensing will be affected, since a part
of i1 would flow into the compensating networks, reducing the error correction at high
frequencies.
[Schematic: two such stages between Vcc and Vee; resistors R1, R2, R3; error current i1; output currents Io ± io]
Fig. 5.4.18: Two circuits from Fig. 5.4.17 can form a differential amplifier with a double
error feedforward directly summed with the output.
[Plot: magnitude (dB), phase (deg) and envelope delay (ns) vs. f = 0.001–10 GHz]
Fig. 5.4.19: Frequency domain performance of the circuit from Fig. 5.4.18. The bandwidth
can be high (560 MHz), but for suitable error sensing the required high value of R1 would
compromise it. The plot of vi indicates the negative input impedance at high frequencies,
which would need additional compensation networks at both inputs and at R1.
[Plot: output voltage, 0.000–0.200 mV vs. t = 0–4 ns]
Fig. 5.4.20: Time domain performance of the circuit in Fig. 5.4.18. Note the jump of vi in
the first 100 ps and a pronounced undershoot in Δvo.
[Schematic: Q1–Q8; emitter networks Re1, Ce1 and Re2, Ce2; base networks Rb3, Cb3 and Rb4, Cb4; bias voltages Vb34, Vb78; tail currents 2I1, 2I2; output current I1 + I2 + io]
Fig. 5.4.21: The Cascomp amplifier employs indirect error sensing and error feedforward.
[Plot: magnitude (dB), phase (deg) and envelope delay (ns) vs. f = 0.001–10 GHz]
[Plot: output voltage, 0.000–0.150 mV vs. t = 0–4 ns]
Fig. 5.4.23: Time domain performance of the Cascomp amplifier. The rise time is
about 0.7 ns and the initial undershoot is very low.
Fig. 5.4.24 shows a similar circuit, but with a feedback error correction. The
error signal is taken at the same point as before, but its amplified version is applied to
the emitters of the input differential pair Q1,2. Unfortunately, in spite of its attractive
concept, this configuration is not suitable for high frequencies, since capacitive emitter
peaking can not be used (a capacitance in parallel with R1 would short the auxiliary
amplifier outputs, reducing the amount of error correction at high frequencies); thus
the bandwidth is only about 180 MHz. But when bandwidth is not the primary design
requirement, this amplifier can be a valid choice. We shall not plot its performance.
[Schematic: Q1–Q6 between Vcc and Vee; resistors R1, R2; bias Vb34; tail currents I1 + I2; error signal ±i at the Q1,2 emitters; output current I1 + io]
Fig. 5.4.24: Differential cascode amplifier with indirect error sensing and
error feedback. Unfortunately this configuration tends to be rather slow.
[Schematic: Q1–Q6; current sources I1; signal current i1; output current Io − io]
[Schematic: Q1–Q8; resistors Re1, Re2, Ra, Rb; bias voltages Vb34, Vb56; tail currents 2I1, 2I2; output currents I1 + I2 ± io]
Fig. 5.4.26: Another evolution of the Cascomp is realized by feedback derived error
sensing and feedforward error correction. The junction of Ra and Rb is the virtual
ground at which the error of the Q1,2 pair is sensed and amplified by the auxiliary
amplifier Q7,8. The error is subtracted from the output current at the emitters of Q5,6.
[Plot: magnitude (dB), phase (deg) and envelope delay (ns) vs. f = 0.001–10 GHz]
Fig. 5.4.27: Frequency domain performance of the Cascomp evolution amplifier. The
bandwidth is a little over 400 MHz.
[Plot: output voltage, 0.000–0.200 mV vs. t = 0–4 ns]
[Schematic: Q1–Q8 between Vcc and Vee; resistors Re1, Re2; capacitor Ce; bias Vbb; currents 2I1, 2I2, I3, I4; output currents I1 + I2 ± io]
Fig. 5.4.29: This output impedance compensation, also patented by Pat Quinn, has
direct error sensing and direct feedforward error correction, performed by Q5–Q8.
[Plot: magnitude (dB), phase (deg) and envelope delay (ns) vs. f = 0.001–10 GHz]
Fig. 5.4.30: Frequency domain performance of the amplifier in Fig. 5.4.29. The
bandwidth is about 400 MHz.
[Plot: output voltage, 0.000–0.200 mV vs. t = 0–4 ns]
back down to the negative supply voltage, needed by an eventual subsequent stage,
requires a greater level shift than in a conventional cascode.
Finally, the Cascomp has a limited ability to handle overdrive signals. The
emitters of Q3,4 do not see the whole signal during overdrive, thus the error
amplifier signal is clipped off at the peaks, and as a result the main and the error amplifier
experience different thermal histories. Additional circuitry is needed to ensure correct
input signal clipping and acceptable thermal behavior.
All these limitations dictated a different approach in the M377 IC. The basic
wideband amplifier block is shown in Fig. 5.4.32 and an improved differential version
in Fig. 5.4.33. This is a feedback amplifier, which can be viewed (oversimplified) as a
compound transistor, with the Q1 base, the Q3 emitter and the Q3 collector
representing the base, emitter and collector of the compound device, respectively.
Compared with a single transistor operating at the same current, such a compound
transistor has a greater gm and β, and also much better linearity.
[Schematic, panels a)–d): Q1–Q5; collector network Rc, Cc, Rd, Lc; base resistor Rb; load RL; currents I1, I3; supplies Vcc, Vee]
Fig. 5.4.32: a) The M377 IC main amplifier block, basic scheme. b) Dominant pole
compensation. c) Inductive peaking. d) Inductance created by the Q5 emitter impedance.
[Schematic, panels a) and b): Q1–Q8; compensation impedance Zc; resistors Re3; currents I1, I3; supplies Vcc1, Vcc2, Vee]
Fig. 5.4.33: a) The basic wideband amplifier block of the M377 IC can be viewed as a
compound transistor, in which the Q1 base, the Q3 emitter, and the Q3 collector
represent the base, emitter and collector of the compound device. b) Two such blocks
form a differential amplifier. An improved design results if Q3 is of a Darlington
configuration. To a high precision the output current is io = vi/Re3. The impedance Zc
is the compensation shown in Fig. 5.4.32d.
conducts owing to I2 and I4, allowing the feedback of the lower circuit half to remain
operative, preserving the delicate thermal balance.
[Schematic: Q1–Q8; diodes D1–D6; compensation impedances Zc; resistors Re3, Re6; current sources I1, I2 (0.5 mA), I4 (1 mA), I3 (12 mA) and 4 mA branches; supplies Vcc1, Vcc2, Vee]
Fig. 5.4.34: The M377 amplifier with the components for high speed overdrive recovery.
The frequency domain and time domain responses of the circuit in Fig. 5.4.34
are shown in Fig. 5.4.35 and 5.4.36, respectively. Note that the compensating
impedance Zc was adjusted to the needs of the transistors used for circuit simulation
(BF959, as in all previous circuits, thus allowing comparison); therefore the graphs do
not represent the true M377 performance capabilities.
Although it can be argued that the simulation has been performed using a
simplified basic version of the circuit, there are a few points to note, which are
nevertheless valid. First, there is the potential instability problem (owed to feedback),
indicated by the phase plot turning upward and the envelope delay going positive
above some 4 GHz. If proper care is not taken, especially compensating the parasitics
and strays in an IC environment, the step response might display some initial
waveform aberrations, or even ringing and oscillations.
Another point of special attention is the parasitic capacitance to the substrate
of the Schottky diodes. Being within the feedback loop, these capacitances can be
troublesome. Proper forward bias for low impedance is needed to move those
unwanted poles (transfer function zeros) well above the cutoff frequency. High bias
would result in high temperatures, which, in a densely packed IC such as this one, can
be problematic. Also, since noise increases with temperature, the bias can not be as
high as one would like. The 0.5 mA bias offers a good compromise.
- 5.120 -
P.Stari, E.Margan
As an advantage, judging by the constant slope of the input voltage plot, the
circuit input impedance is well behaved, thus the loading of a previous stage should
not be critical. Likewise, the active inductive peaking, realized by the base resistance
Rb and transformed into an inductance at the Q5 emitter (as shown in Fig. 5.4.32d), offers
a simple way of adjusting the frequency compensation network.
[Plot: magnitude (dB), phase (deg) and envelope delay (ns) vs. f = 0.001–10 GHz]
Fig. 5.4.35: Frequency domain performance of the amplifier in Fig. 5.4.34. The
bandwidth of the simulated circuit is about 500 MHz, but this is owed to the transistor
used (BF959) and the frequency compensation network adjusted in accordance,
therefore the graph is not representative of the actual M377 IC performance.
[Plot: output voltage, 0.000–0.200 mV vs. t = 0–4 ns]
Fig. 5.4.36: Time domain performance of the amplifier in Fig. 5.4.34. The comment in
the caption of Fig. 5.4.35 also holds here.
Now take a close look at the circuit in Fig. 5.4.34; in particular, the diode pairs
D2,3 and D5,6, the resistors Re3,6 and the current source I3. If another such block is
added in parallel (with different values of the resistors Re3,6), and if the current sources
are switched on one at a time, gain switching in steps can be achieved. The
bandwidth would change only slightly with switching. Fig. 5.4.37 shows such a circuit
with two gain values, but three or more can easily be added.
Another way of changing the vertical sensitivity is to use a fixed gain
amplifier and switch the attenuation at its output, as shown in Fig. 5.4.38. In this way
- 5.121 -
P.Stari, E.Margan
the bandwidth is preserved, but attenuation switching has its own weak points, such
as a reduced signal range and higher noise at higher attenuation.
As a point of principle, switching the amplifier gain is preferable to fixed gain
with switched attenuation. Although it alters the bandwidth, gain switching preserves
the signal's dynamic range at all settings, whilst the system with fixed gain and an
attenuator will have a comparable dynamic range only with no attenuation; at any
other attenuation setting the dynamic range would be proportionally reduced. Also,
gain switching systems will preserve the same noise level, whilst the fixed gain
systems will have the lowest signal to noise ratio at maximum attenuation.
[Schematic: Q1–Q6; diodes D1–D8; 2R resistors; current sources I1, I2; compensation impedances Zc; supplies Vcc1, Vcc2, Vee]
Fig. 5.4.37: Gain switching in steps was achieved by adding one or more emitter current
sources (I1, I2, …) with appropriate resistor values and Schottky diode pairs, and switching on
one current source at a time.
[Schematic: Q1–Q8; R–2R collector network; emitter resistors Re; bias voltages Vb2–Vb4; tail current 2Ie0; supplies Vcc, Vee]
Fig. 5.4.38: With an R–2R network the attenuation can be switched in steps by applying a
positive DC voltage Vb2,3,4 to Q2,6, Q3,7, Q4,8, one pair at a time; at each collector the load is
R∥2R = 2R/3. With Q2,6 on the gain is A = 2R/(3Re), with Q3,7 on A = R/(3Re), and with Q4,8
on A = R/(6Re), effectively halved by each step. A similar circuit was used in the Tek 2465.
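Assuming collector loads of R∥2R and an emitter resistance Re, the stepwise halving of the gain can be checked with a few lines of arithmetic; the component values below are arbitrary illustrations, and the gain expressions are an interpretation of the caption, not taken verbatim from the book:

```python
# Collector load and gain steps of an R-2R switched attenuator.
# Assumed relations: load = R parallel 2R, successive gains
# A1 = 2R/(3*Re), A2 = R/(3*Re), A3 = R/(6*Re).
R = 1000.0   # ladder resistance [ohm], illustrative
Re = 100.0   # emitter resistance [ohm], illustrative

load = (R * 2 * R) / (R + 2 * R)          # R || 2R = 2R/3
gains = [2 * R / (3 * Re), R / (3 * Re), R / (6 * Re)]
ratios = [b / a for a, b in zip(gains, gains[1:])]
print(load, ratios)                       # 2R/3 and [0.5, 0.5]
```

Each switching step multiplies the gain by exactly 0.5, which is the point of tapping an R–2R ladder.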
For small gain or attenuation changes of, say, 1:4, as commonly found in
oscilloscope amplifiers, these differences can be small. However, in the M377 the gain
switching had a 50:1 range. With such a wide gain range the frequency compensation
needed to be readjusted (for the highest gain no compensation was needed).
5.4.7 The Gilbert Multiplier
A similar problem of compensation readjustment as a function of gain is
encountered with a continuously variable gain. Although used only occasionally, a
continuously variable gain is a standard feature of almost all oscilloscopes and no
manufacturer dares to exclude it, even in digital instruments (although there it is done
in very small steps).
In older analog instruments a simple potentiometer was used. This worked
well up to some 20 MHz. For higher frequencies the pot size and the variable
impedance at the slider represented major difficulties, even if the required gain
change was within a relatively small range, ordinarily about 3:1. At Tektronix, an
ingenious wire pot was used, having a bifilar winding to cancel the inductance; but
its parasitic capacitance, which also varied with the setting, was causing too much
cross-talk at higher frequencies. Finally, there was also the mechanical problem of
placing the pot at the correct point in the circuit and still being able to bring its axis
to the front panel, aligned with the main attenuator switch.
A much more elegant choice is to use some sort of electronic gain control, by
using either a voltage or a current controlled amplifier (VCA or ICA). Such an
amplifier modulates (ideally) only the signal amplitude, a process which can be
mathematically described as multiplication of a signal by a DC voltage; thus we often
refer to those amplifiers as multipliers or modulators. Of course, electronic gain
control has its own problems and great care is needed to make it linear enough and
fast enough, as well as not too noisy. But it solves the problem of mechanical pot
placement, since it now has to handle only a DC control signal, so the pot can be
placed at any convenient place. In digital systems, the pot is replaced by a digital to
analog converter (DAC; in lower speed instruments, a multiplying DAC can be used
to attenuate the signal directly, replacing the VCA altogether).
Oscilloscopes do not need to exploit the full modulation range, as RF
modulators normally do. In contrast to RF modulators, which are four-quadrant
devices (both the carrier and the modulation are AC signals), the gain control in
oscilloscopes needs to work in two quadrants only (AC signal and DC control); four
quadrants would allow simple gain inversion, but this is more accurately done by a
switch. Therefore the modulation cross-talk or the common mode rejection at HF is
not an issue. On the other hand, DC stability is important since it directly affects
measurement accuracy. Whilst RF modulators operate over a limited frequency range,
for oscilloscopes the wideband gain flatness at all gain settings is also very important.
The simple differential amplifier in Fig. 5.4.8a can perform the variable gain
control by varying the emitter current. If we assume that the modulation voltage is
vM = VM − Vbe − Vee, the modulation current is:

    2Ie = vM / Re                                                  (5.4.64)
By inserting this into the gain equation of the differential amplifier the multiplication
function results:

    vo = vbb·(2R)/(2re) = vbb·(R·Ie)/VT = vbb·vM·R/(2·Re·VT)       (5.4.65)
Unfortunately the bandwidth is also proportional to the emitter current and with the
usual values of V and stray capacitances, the dominant pole at low currents can be
very low. In addition the output common mode level also changes with current.
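The proportionality of both the gain and the emitter-node pole to the current can be illustrated with a short sketch; all component values here (R, Re, and the stray capacitance Cs) are hypothetical, chosen only to show the scaling:

```python
import numpy as np

VT = 0.026        # thermal voltage at room temperature [V]
R = 1e3           # collector load [ohm], illustrative
Re = 250.0        # emitter resistor [ohm], illustrative
Cs = 2e-12        # assumed stray capacitance at the emitter node [F]

for vM in (0.1, 0.5, 1.0):             # modulation voltage [V]
    Ie = vM / (2 * Re)                 # per-transistor current (2Ie = vM/Re)
    gain = R * Ie / VT                 # gain proportional to Ie
    re = VT / Ie                       # dynamic emitter resistance
    f_p = 1 / (2 * np.pi * re * Cs)    # emitter pole, also proportional to Ie
    print(f"vM = {vM:.1f} V: gain = {gain:.2f}, pole = {f_p / 1e6:.0f} MHz")
```

A 10:1 change of the modulation voltage shifts both the gain and the pole by the same 10:1 factor, which is exactly the compensation problem described above.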
Almost all wideband multipliers are based on one of the variations of the basic
circuit, now known as the Gilbert multiplier, after its inventor Barrie Gilbert (see
[Ref. 5.56–5.62]). The circuit development can be followed from Fig. 5.4.39a by
noting that if the output is to be a linear function, the input has to be nonlinear.
[Schematic (Fig. 5.4.39), panels a)–c): differential pairs Q1–Q6; collector currents (1 ± x)Ie, (1 ± x)Ie2; tail currents 2Ie, 2Ie1, 2Ie2; resistors R, Re; modulation input vM; supplies Vcc, Vcc1, Vcc2, Vee]
The emitter current follows the exponential junction law:

    Ie = Is·(e^(Vbe/VT) − 1)                                       (5.4.66)

and considering that the intrinsic current Is ≈ 10⁻¹⁴ A, then even for currents as low
as 1 nA we have Ie ≫ Is; so we are not making a big mistake if we use:

    Ie = Is·e^(Vbe/VT)                                             (5.4.67)
Since the differential pair was made by the same IC process, we can expect that the
devices will be reasonably well matched, so Is1 ≈ Is2, and their temperature
dependence will also be well matched if the transistors are at the same temperature:

    Ie1/Ie2 = (Is1/Is2)·e^((Vbe1 − Vbe2)/VT) ≈ e^((Vbe1 − Vbe2)/VT)          (5.4.68)
In order to achieve a linear output current the expected input voltage should follow the
logarithmic function:

    vbb = ΔVbe = VT·ln(Ie1/Ie2)                                    (5.4.69)
Expressing the two emitter currents by the quiescent current Ie0 and the signal current ie:

    Ie1 = Ie0 + ie   and   Ie2 = Ie0 − ie                          (5.4.70)

with the normalized signal:

    x = ie/Ie0                                                     (5.4.71)

we have:

    Ie1 = Ie0·(1 + x)   and   Ie2 = Ie0·(1 − x)                    (5.4.72)
and, returning to Eq. 5.4.69, we obtain the required nonlinear input function:

    vbb = VT·ln[Ie0·(1 + x) / (Ie0·(1 − x))] = VT·ln[(1 + x)/(1 − x)]        (5.4.73)
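Eq. 5.4.73 can be verified numerically: feeding the predistorted voltage to an ideal exponential pair returns the normalized signal x exactly. A short sketch (VT and the x range are arbitrary choices):

```python
import numpy as np

VT = 0.026                        # thermal voltage [V]
x = np.linspace(-0.9, 0.9, 19)    # normalized signal current ie/Ie0

# predistorted input voltage, Eq. 5.4.73
vbb = VT * np.log((1 + x) / (1 - x))

# an ideal matched pair splits its tail current so that
# Ie1/Ie2 = exp(vbb/VT); recover the normalized output from that ratio
ratio = np.exp(vbb / VT)
x_out = (ratio - 1) / (ratio + 1)

print(np.max(np.abs(x_out - x)))  # essentially zero: the cascade is linear
```

The logarithm and the exponential junction law cancel exactly, which is the heart of the Gilbert linearization.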
[Plot: collector voltages c1–c4, −0.25…+0.25 V vs. vs = −0.5…+0.5 V, for IM = 0.4 mA and 4 mA]
Fig. 5.4.40: DC transfer function of the Gilbert multiplier of Fig. 5.4.39, for 2Ie1 = 4 mA
and modulation current 2Ie2 = IM = 0.4–4 mA. The signal source (vs) range is ±0.5 V.
[Plot: c3 − c4 magnitude, 0–1.00 V vs. f = 0.001–10 GHz, for IM = 0.4 mA and 4 mA]
Fig. 5.4.41: The Gilbert multiplier bandwidth is almost constant over the 10:1 modulation
current range.
[Schematic, panels a) and b): cross-connected current mirrors with ratios 1:A and A:1; currents (1 ± x)Ie1, (1 ± x)Ie2; tail current 2Ie2; supply Vee]
Fig. 5.4.42: Another way of developing the Gilbert multiplier is by interconnecting two
current mirrors into a differential amplifier, whose input nonlinearity is compensated by the
nonlinearity of the two grounded transistors.
[Schematic: control currents Ictrl; tail currents 2(1 ± x)Ie2; total tail current 4Ie; supply Vee]
Fig. 5.4.43: A four quadrant multiplier developed from the previous circuit. The gain is
controlled by current biasing the compensation transistors. Four quadrant operation is
obtained from the fact that the cross-coupled collectors cancel out if the two tail currents are
equal, and the third differential amplifier allows the tail currents to be distributed symmetrically
about the mid-bias value. Thus both the input and the control can be AC signals and
can also be mutually exchanged, without compromising the bandwidth or the linearity.
- 5.127 -
P.Stari, E.Margan
[Schematic: control currents Ictrl; tail currents 2(1 ± x)Ie; total tail current 4Ie; supply Vee]
Fig. 5.4.44: Two quadrant multiplication is sufficient for the oscilloscope continuously
variable gain control; however, the same differential symmetry of the four quadrant multiplier
has been retained for the M377 because of good thermal balance and DC stability.
It might be interesting to note that Barrie Gilbert published an article [Ref. 5.56] describing his
multiplier before Tektronix applied for a patent. Motorola quickly seized the opportunity and started
producing it (as the MC1495). Tektronix claimed priority and Motorola admitted it, but
nevertheless continued the production, since, once published, the circuit was in the public domain. Barrie's
misfortune gave the opportunity to many generations of electronics enthusiasts (including the authors
of this book) to play with this little jewel and use it in many interesting applications. Thanks, Barrie!
Résumé of Part 5
In this part we have briefly analyzed some of the most important aspects of
system integration and system level performance optimization, with a special
emphasis on system bandwidth.
We have described the transient response optimization by a pole assignment
process called geometrical synthesis and showed how it can be applied using
inductive peaking. We have discussed the problems of input signal conditioning, the
linearization and error reduction and correction techniques, employing the feedback
and feedforward topologies at either the system level or at the local, subsystem level.
We have also revealed and compared some aspects of designing wideband amplifiers
using discrete components and modern IC technology.
On the other hand, we have said very little about other important topics in
wideband instrumentation design, such as adequate power supply decoupling,
grounding and shielding, signal and supply path impedance control by strip line and
microstrip transmission line techniques, noise analysis and low noise design, and the
parasitic impedances of passive components. But we believe that those subjects are
extensively covered in the literature, some of it also cited in the references, so we
have tried to concentrate on the bandwidth and transient performance issues.
We have also said nothing about high sampling rate analog to digital
conversion techniques, now already established as the essential ingredient of modern
instrumentation. While there are many books discussing AD conversion, most of
them are limited to descriptions of applying a particular AD converter, or, at most, to
compare the merits of one conversion method against others. Only a few of them
discuss ADC circuit design in detail, and even fewer the problems of achieving top
sampling rates for a given resolution, either by an equivalent time or a real time
sampling process, time interleaving of multiple converters, combining analog and
digital signal processing and other techniques, which today (first decade of the XXI
century) allow the best systems to reach sampling rates of up to 20 GSps (Giga
Samples per second) and bandwidths of up to 6 GHz.
Just like many other books, this one, too, ends just as it has become most
interesting. (The reader might ask whether there is really nothing more to say,
or whether the authors simply ran out of ideas; since most of the circuits presented
are not of our origin, and electronics certainly is an art of infinite variations, we the
authors can, one hopes, be spared the blame.) As already said in the Foreword, the
most difficult thing when writing about an interesting subject is not what to include,
but what to leave out. Although we discuss the effects of signal sampling in Part 6
but what to leave out. Although we discuss the effects of signal sampling in Part 6
and a few aspects of efficient system design combining analog and digital technology
in Part 7, this book is about amplifier design, so we leave the fast ADC circuit design
discussion for another opportunity.
References:
[5.1]
[5.2]
[5.3]
P.R. Gray & R.G. Meyer, Analysis and Design of Analog Integrated Circuits,
John Wiley, New York, 1969
[5.4]
[5.5]
[5.6]
[5.7]
[5.8]
A.D. Evans (Editor), Designing with Field Effect Transistors, Siliconix, McGraw-Hill, 1981
[5.9]
[5.10]
[5.11]
IC Applications Handbook,
Burr-Brown, 1994, <http://www.burr-brown.com/>, <http://www.ti.com/>
[5.12]
[5.13]
S. Franco, Design with Operational Amplifiers and Analog ICs, McGraw-Hill, 1988
[5.14]
[5.15]
[5.16]
[5.17]
[5.18]
Note: For US Patents go to <http://www.uspto.com/> and type the patent number in the Search pad.
Patent figures are in TIFF graphics format, so a TIFF viewer software is recommended (links
for downloading and installation are provided within the USPTO web pages).
[5.19]
J.L. Addis, P.A. Quinn, Broadband DC Level Shift Circuit With Feedback,
US Patent 4 725 790, Feb. 16, 1988
[5.20]
[5.21]
[5.22]
[5.23]
[5.24]
[5.25]
[5.26]
[5.27]
[5.28]
[5.29]
[5.30]
[5.31]
R.W. Anderson, s-Parameter Techniques for Faster, More Accurate Network Designs,
Hewlett-Packard, Application Note AN-95-1
[5.32]
[5.33]
[5.34]
J. Bales, A Low Power, High Speed, Current Feedback OpAmp with a Novel Class AB High
Current Output Stage, IEEE Journal of Solid-State Circuits, Vol. 32, No. 9, Sept. 1997
[5.35]
[5.36]
[5.37]
T.T. Regan, Designing with a New Super Fast Dual Norton Amplifier,
National Semiconductor Application Note AN-278, Sept., 1981
[5.38]
[5.39]
[5.40]
[5.41]
[5.42]
[5.43]
[5.44]
[5.45]
[5.46]
[5.47]
M.J. Hawksford, Low Distortion Programmable Gain Cell Using Current Steering Cascode
Topology, Journal of the Audio Engineering Society, Vol. 30, No. 11, pp. 795–799, Nov. 1982
[5.48]
[5.49]
[5.50]
[5.51]
[5.52]
[5.53]
[5.54]
[5.55]
[5.56]
[5.57]
[5.58]
[5.59]
[5.60]
[5.61]
[5.62]
[5.63]
[5.64]
[5.65]
[5.66]
[5.67]
[5.68]
P. Starič, E. Margan: Wideband Amplifiers

Part 6: Computer Algorithms
Contents:
6.0. Aim and motivation .......................................................................................................................... 6.5
6.1. LTIC System Description A Short Overview .............................................................................. 6.7
6.2. Algorithm Syntax And Terminology .............................................................................................. 6.11
6.3. Poles And Zeros ............................................................................................................................. 6.13
6.3.1. Butterworth Systems ..................................................................................................... 6.15
6.3.2. Bessel–Thomson Systems ............................................................................................. 6.17
6.4. Complex Frequency Response ....................................................................................................... 6.21
6.4.1. Frequency Dependent Response Magnitude ................................................................. 6.22
6.4.2. Frequency Dependent Phase Shift ................................................................................ 6.28
6.4.3. Frequency Dependent Envelope Delay ......................................................................... 6.31
List of Figures:
Fig. 6.3.1: 5th -order Butterworth poles in the complex plane ................................................................ 6.16
Fig. 6.3.2: Bessel–Thomson poles of systems of 2nd - to 9th -order ........................................................ 6.20
Fig. 6.4.1: 5th -order Butterworth magnitude over the complex plane .................................................... 6.23
Fig. 6.4.2: 5th -order Butterworth complex response vs. imaginary frequency ....................................... 6.24
Fig. 6.4.3: 5th -order Butterworth Nyquist plot ....................................................................................... 6.25
Fig. 6.4.4: 5th -order Butterworth magnitude vs. frequency in linear scale ............................................ 6.26
Fig. 6.4.5: 5th -order Butterworth magnitude in log–log scale ................................................................ 6.27
Fig. 6.4.6: 5th -order Butterworth phase, modulo 2π .............................................................................. 6.28
Fig. 6.4.7: 5th -order Butterworth phase, unwrapped .............................................................................. 6.29
Fig. 6.4.8: 5th -order Butterworth envelope delay .................................................................................. 6.32
Fig. 6.5.1: 5th -order Butterworth impulse- and step-response ............................................................... 6.35
Fig. 6.5.2: Impulse response in time- and frequency-domain ................................................................ 6.37
Fig. 6.5.3: The negative frequency concept explained ........................................................................ 6.39
Fig. 6.5.4: Using window functions to improve calculation accuracy ................................................ 6.43
Fig. 6.5.5: 1st -order system impulse response calculation error ............................................................ 6.52
Fig. 6.5.6: 1st -order system step response calculation error .................................................................. 6.52
Fig. 6.5.7: 2nd -order system impulse response calculation error ........................................................... 6.53
Fig. 6.5.8: 2nd -order system step response calculation error ................................................................ 6.53
Fig. 6.5.9: 3rd -order system impulse response calculation error ........................................................... 6.54
Fig. 6.5.10: 3rd -order system step response calculation error ................................................................ 6.54
Fig. 6.5.11: Step-response of Bessel–Thomson systems of order 2 to 9 ............................................... 6.55
Fig. 6.7.1: Pole layout of 5th -order Butterworth and Bessel–Thomson systems .................................... 6.61
Fig. 6.7.2: Magnitude of 5th -order Butterworth and Bessel–Thomson systems .................................... 6.62
Fig. 6.7.3: Step-response of 5th -order Butterworth and Bessel–Thomson systems ............................... 6.62
List of routines:
BUTTAP Butterworth poles ............................................................................................................ 6.16
BESTAP Bessel–Thomson poles ................................................................................................... 6.19
PATS
PHASE
GDLY
TRESP
ATDR
Transient response from poles and zeros, using residues ................................................ 6.59
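The routines themselves are given in the book in its own algorithm notation; purely as an illustration of what a routine like BUTTAP computes, here is a Python sketch (my own, not the book's code) returning the normalized Butterworth poles:

```python
import numpy as np

def buttap(n):
    """Poles of a normalized n-th order Butterworth system:
    n points on the unit circle, all in the left half-plane."""
    k = np.arange(1, n + 1)
    theta = np.pi * (2 * k + n - 1) / (2 * n)
    return np.exp(1j * theta)

print(buttap(5))
```

For n = 5 this yields five unit-magnitude poles spaced 36 degrees apart, all with negative real parts, matching the pattern of Fig. 6.3.1.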
6.1. LTIC System Description – A Short Overview

An LTIC system is described by a differential equation of the general form:

    Σ(i=0..n) bi·y(i)(t) = Σ(j=0..m) aj·x(j)(t)                    (6.1.1)

where the coefficients aj and bi are derived from the system's time constants, whilst
y(i) and x(j) are the ith and jth derivatives of the output and input signals, as required by
the system's order. From the theory of differential equations we know that the solution
of Eq. 6.1.1, given the initial conditions y(0), y′(0), y″(0), …, y(n−1)(0), is of the form:
    y(t) = yh(t) + yf(t)                                           (6.1.2)

Here yh(t) is the solution of the homogeneous differential equation (in which x(t) and
all its derivatives are zero), whilst yf(t) is the particular solution for x(t), which
means that yf(t) = yf{x(t)}. From circuit theory we know that yh(t) represents the
natural (also free, impulse, transient, or relaxation from the initially energized
state) response and yf(t) represents the forced (also particular, final, steady state)
response.
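As a small numeric illustration of this split (a sketch under assumed values, not from the book): for a first-order low-pass τ·y′ + y = x driven by a unit step, the forced response is yf = 1 and the natural response is yh = −e^(−t/τ); a crude Euler integration reproduces their sum:

```python
import numpy as np

tau = 1e-3                           # assumed time constant [s]
t = np.linspace(0.0, 10 * tau, 10001)
dt = t[1] - t[0]

y = np.zeros_like(t)
for k in range(1, len(t)):           # explicit Euler integration of y' = (x - y)/tau
    y[k] = y[k - 1] + dt * (1.0 - y[k - 1]) / tau

y_analytic = 1.0 - np.exp(-t / tau)  # yf(t) + yh(t)
print(np.max(np.abs(y - y_analytic)))
```

The natural part decays away and only the forced (steady state) part survives, which is exactly the property used next to define the transfer function.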
Knowing that yf(t) is a description of the output signal in time very distant from
the initial disturbance, when the system has regained a new state of (static or dynamic)
balance, we can define the system transfer function F(s) from yf(t) and x(t):

    F(s) = yf(t)/x(t)   where   x(t) = e^(st)                      (6.1.3)

Such an input signal has been chosen merely because it still retains its
exponential form when differentiated, i.e.:

    x(n)(t) = s^n·e^(st)                                           (6.1.4)
Returning to Eq. 6.1.3 we can now define the system transfer function as a
rational function of s:

    F(s) = yf(t)/x(t) = (am·s^m + am−1·s^(m−1) + … + a1·s + a0) / (bn·s^n + bn−1·s^(n−1) + … + b1·s + b0)          (6.1.8)

Each of the two polynomials can be written as a product of its root factors:

    Pn(s) = Σ(i=0..n) ai·s^i = an·(s − r1)(s − r2) ⋯ (s − rn)      (6.1.9)
The value of this product is zero whenever s assumes the value of a root rk.
Therefore we can rewrite Eq. 6.1.8 as:

    F(s) = [(s − z1)(s − z2) ⋯ (s − zm−1)(s − zm)] / [(s − p1)(s − p2) ⋯ (s − pn−1)(s − pn)]          (6.1.10)

Here the roots of the polynomial in the numerator are the system's zeros, zj, and the
roots of the polynomial in the denominator are the system's poles, pi.

We shall have this form in mind whenever a system is specified, because we
shall always start the design by specifying some optimum pole–zero pattern as the
design goal and then work towards the required system's time constants.
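The pole–zero form of Eq. 6.1.10 is also convenient numerically; a small sketch (with the overall gain factor assumed to be 1) evaluates F(jω) directly from the roots:

```python
import numpy as np

def freq_response(zeros, poles, w):
    """Evaluate F(jw) from the factored form of Eq. 6.1.10;
    the overall gain factor is taken as 1."""
    s = 1j * np.asarray(w, dtype=float)
    num = np.ones_like(s)
    for z in zeros:
        num = num * (s - z)
    den = np.ones_like(s)
    for p in poles:
        den = den * (s - p)
    return num / den

# example: 2nd-order Butterworth poles at exp(+-j*3*pi/4)
a = np.sqrt(2) / 2
F = freq_response([], [complex(-a, a), complex(-a, -a)], [0.0, 1.0])
print(np.abs(F))   # 1 at DC, 1/sqrt(2) at the cutoff frequency
```

The same factored evaluation is what the magnitude, phase and envelope delay routines of Sec. 6.4 build upon.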
The system's time domain equivalent of F(s), labeled f(t), is the system's
impulse response:

    yh(t) = f(t)   for   x(t) = δ(t)                               (6.1.11)

where δ(t) is the Dirac delta function (the infinitesimal time limit of the unit area impulse).

The response to an arbitrary input signal may then be found by convolving the
input signal with the system's impulse response (for convolution see Part 1, Sec. 1.15;
see also the VCON routine in Part 7, Sec. 7.2).
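A discrete sketch of this convolution (the time constant and step sizes are illustrative): the sampled impulse response of a first-order system, convolved with a unit step, approaches the analytic step response:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
f = np.exp(-t) * dt              # sampled impulse response, scaled by dt
x = np.ones_like(t)              # unit step input
y = np.convolve(x, f)[:len(t)]   # discrete convolution, causal part only

err = np.max(np.abs(y - (1.0 - np.exp(-t))))
print(err)                       # small discretization error
```

Scaling the impulse response samples by dt makes the discrete sum approximate the convolution integral, so the error shrinks with the step size.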
It is owed to Oliver Heaviside (1850–1925, [Ref. 6.4–6.8]), who pioneered the
transform theory, that we solve differential equations through the use of the Laplace
transform¹.
The transform is applied to the time variable > through a single time domain
integration, producing a new variable =, whose dimension is >" (frequency). In the
frequency domain the 8th -order differential equation is reduced to an 8th -order
polynomial, whilst the convolution is reduced to simple multiplication. Once solved
(using simple algebra), the result is transformed back to the time domain.
1
Apparently, Heaviside developed his operational calculus in the 1890s independently of Laplace.
Although useful and giving results in accordance with practice, his method was considered unorthodox
and suspicious for quite a while, and only in the mid 1930s was it realized that the theoretical basis for
his work could be traced back to Laplace. Interestingly, he also developed the method of compensating
the dominantly capacitive telegraph lines by inductive peaking, amongst many other things.
F(s) = ℒ{f(t)} = ∫_(-∞)^(+∞) f(t) e^(-st) dt    (6.1.12)

where ℒ{ } denotes the Laplace operator as defined by the integral (see Part 1, Sec. 1.4).
Actually, the integration is usually made from t = 0 and not from -∞, in order to preserve the response's causality (i.e., something happens only after closing the switch). This limitation is caused by the term e^(-st), which for t < 0 would not integrate to a finite value unless f(t) = 0 for t < 0. Such a restriction is readily accomplished if we modulate the input signal by closing a switch at t = 0. Mathematically, this can be expressed by multiplying f(t) by h(t), the Heaviside unit step function. In our case this is not necessary, since for the calculation of the transient response we consider only input signals which satisfy the convergence condition by definition. Also, we shall always assume that the system under investigation was powered up for a time long enough to settle down, so we can safely say that all initial conditions are zero (or an additive constant at worst).
Physically, by multiplying the time domain function by e^(-st) in Eq. 6.1.12 we have canceled the rotation of the phasor e^(st) at that particular frequency (s), allowing the function to integrate to some finite value (see Part 1, Sec. 1.2). At other frequencies the phasors will continue to rotate, integrating eventually to zero. By doing so for all frequencies we produce the frequency domain equivalent of f(t). This same process goes on in a sweeping filter spectrum analyzer; the only difference is that in our case an infinitely narrow filter bandwidth is considered. Indeed, such a bandwidth takes an infinitely long energy build up time, thus the integration must also last infinitely long and be performed in infinitely small steps.
The inverse transform process is defined as:

f(t) = ℒ^(-1){F(s)} = (1/(2πj)) ∫_(σ-j∞)^(σ+j∞) F(s) e^(st) ds    (6.1.13)

where σ is an arbitrarily chosen real valued positive constant for which the inversion solution exists (this restriction is required for functions which do not decay to zero in some finite time, for which the integral would otherwise not converge, e.g., the unit step).
Eq. 6.1.1 can now be written as:

y(t) = ℒ^(-1){ F(s) · ℒ{x(t)} }    (6.1.14)

Note that for transient response calculation, x(t) (the time domain input function) is either the Dirac function δ(t) (the unity area impulse) or the Heaviside function h(t) (the unity amplitude step). In these two cases ℒ{x(t)} = X(s) is either 1 (the transform of the unity area impulse) or 1/s (the transform of the unity amplitude step), as we have already seen in Part 1.
Eq. 6.1.14 has been used extensively in previous parts to calculate the transient responses analytically. However, for the calculation of the frequency response we are interested only in that part of the transformed function which is a function of the purely imaginary variable jω:

F(jω) = F(s)|_(s=jω)    (6.1.15)
However, the Fourier transform of the unity amplitude step does not
converge, so we shall have to use an additional procedure to calculate the step
response.
It is possible to put Eq. 6.1.1 into numerical form [Ref. 6.20, 6.21]. Whilst there
are ways of using Eq. 6.1.14 in numerical form [Ref. 6.22, 6.23], we shall rather
concentrate on Eq. 6.1.15, since the Fast Fourier Transform algorithm (FFT, [Ref. 6.16-6.19]), which we are going to use, offers some very distinct advantages. In addition, we
shall develop an algorithm based on the residue theory (Part 1, Sec. 1.9); the details are
given in Sec. 6.6.
Another point to consider, known from modern filter theory, is that optimized high order systems are difficult to realize in direct form, because the ratio of the smallest to the largest time constant quickly falls below component tolerances as the system's order is increased. Butterworth [Ref. 6.11] has shown that optimum system performance is more easily met by a cascade of low order systems (several of second order and only one of third order, if n is odd) separated by amplifiers. As a bonus, such structures satisfy the gain-bandwidth product requirement more easily. So in practice we shall rarely need to solve high order system equations, usually only at the system integration level.
The formulae presented above will be used as the starting point in algorithm
development. We shall develop the algorithms for calculating the system poles for a
desired system order, the complex frequency response, the magnitude and phase
response, the group (envelope) time delay, the impulse response, the step response, and
the numerical convolution. These algorithms could, of course, be written to solve only our particular class of problems. It is wise, however, to write them to be as universally applicable as possible, even at the cost of some algorithmic efficiency, to suit eventual future needs.
The following Matlab notation is used throughout our routines:

   vector         a one-dimensional array of values
   scalar         a single value (a 1-by-1 matrix)
   size, length   the matrix dimensions, and the larger of the two
   submatrix      a part of a matrix, selected by index ranges
   .*  ./  .^     element-by-element multiplication, division and power
   ==  ~=         relational operators: equal, not equal
   <   <=         less than, less than or equal
   >   >=         greater than, greater than or equal
   &   ~          logical AND, logical NOT
   ;              semicolon, logical end of command line; for matrices it indicates the end of a row
   2 + 3j         a complex constant
In this text we shall only deal with the Butterworth [Ref. 6.11] and Bessel-Thomson [Ref. 6.12, 6.13] low pass systems for the calculation of poles, since these are required in wideband amplifier design. If needed, Chebyshev, inverse Chebyshev and elliptic (Cauer) functions are provided in the Matlab Signal Processing Toolbox, as are the low pass to high pass, band pass and band stop transform algorithms. The toolbox also contains many other useful algorithms, such as RESIDUE, ROOTS, etc., which will not be considered here (see [Ref. 6.1, 6.2, 6.19]).
In order to be able to compare the performance of different systems on a fair basis we must specify some form of system standardization:

a) all systems will have the pole values normalized for an upper half power angular frequency ω_h of 1 radian per second (equivalent to the cycle frequency f_h = 1/(2π) [Hz]). This leads to the use of a normalized frequency vector, implying that whenever we write either f or ω, we shall actually mean f/f_h or ω/ω_h, respectively.

Please note that this can sometimes cause a bit of confusion, since f/f_h is the same as ω/ω_h, but ω = 2πf; so we should keep an eye on the factor 2π, especially when denormalizing the poles to the actual system upper half power frequency.

The frequency response is calculated as a function of s, since the poles and zeros are mapped in terms of s = σ + jω, where both the real and the imaginary part are measured in [rad/s], but we usually plot it as a function of f (in [Hz]). If the values of the poles are normalized we can use the same normalized frequency vector to calculate and plot the frequency domain functions. Therefore, to plot the magnitude and phase responses vs. frequency we shall not have to divide the frequency vector (of a length usually between 100 and 1000 elements) by 2π.

Since Matlab will not accept the symbol ω as a valid name for a variable, we shall replace it by w=2*pi*f in our routines.
b) all systems will have their DC gain (at ω = 0) normalized to A_0 = 1 (throughout this text we shall consider low pass systems only). Nevertheless, we shall try to provide the correct gain treatment in the general case, in order to broaden the applicability of our algorithms.
Of course, to extract the actual system component values, as well as to scale the various frequency and time domain responses to comply with the desired upper frequency f_h, the poles and zeros will have to be denormalized (multiplied) by 2πf_h. Also, each response will have to be scaled by the required gain factor.

I.e., for a simple current driven shunt RC system, the normalized pole, s_1n = -1/(R_1n C_1n) = -1, is first denormalized to the value of the desired bandwidth, s_1 = s_1n · 2πf_h. From s_1 we get the new component values, R_1 C_1 = 1/(2πf_h). Finally, we multiply R_1 by the gain factor, R = A R_1, to obtain the desired output voltage from the available input current, that is v_o = R i_i, and then reduce C_1 by the same amount, C = C_1/A. If C is a stray capacitance it cannot be reduced below the limit imposed by the circuit topology. Then we must work backwards, by first finding the R which would give the desired bandwidth, and then determining the input current which will give the required output voltage.
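The same denormalization arithmetic can be sketched in a few lines of Python (the target bandwidth, the gain, and the 10 pF capacitance below are hypothetical values chosen for illustration only):

```python
import math

# Hypothetical design targets (not from the text): f_h = 100 MHz, gain A = 5
fh = 100e6
A = 5.0

# The normalized pole s1n = -1/(R1n*C1n) = -1 is denormalized to the
# desired bandwidth, so the new components must satisfy R1*C1 = 1/(2*pi*fh)
C1 = 10e-12                          # assume a 10 pF total shunt capacitance
R1 = 1.0 / (2 * math.pi * fh * C1)

# Scale R by the gain factor and reduce C by the same amount:
R = A * R1
C = C1 / A
# R*C is unchanged, so the bandwidth is preserved while the gain is raised
```

Note that the product R·C still equals 1/(2πf_h), which is exactly why the bandwidth survives the gain scaling.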
The Butterworth low pass magnitude is maximally flat:

|F(jω)|² = 1 / (1 + ω^(2n))    (6.3.1)

Since the magnitude squared is the product of the transfer function with its complex conjugate:

|F(jω)|² = F(s) F(-s)|_(s=jω)    (6.3.2)

F(s) F(-s) = 1 / (1 + (-s²)^n)    (6.3.3)

the poles are the solutions of:

s = (-1)^(1/(2n))    (6.3.4)

of which we take the left half plane ones:

s_k = cos( π/2 + (2k-1)π/(2n) ) + j sin( π/2 + (2k-1)π/(2n) )    (6.3.5)

The all-pole transfer function is then:

F(s) = k_0 / Π_(k=1)^(n) (s - s_k)    (6.3.6)

where s_k are found from the expression of Eq. 6.3.5 in the exponential form:

s_k = e^( jπ( 1/2 + (2k-1)/(2n) ) )   for k = 1, 2, 3, ..., n    (6.3.7)

and:

k_0 = Π_(k=1)^(n) (-s_k)    (6.3.8)

In the general (non-normalized) case, ω_h^n = k_0. At the normalized half power frequency:

|F(j·1)| = 1/√2    (6.3.9)
Eq. 6.3.7 and Eq. 6.3.8 are implemented in the Matlab Signal Processing Toolbox function called BUTTAP (an acronym for BUTTerworth Analog Prototype):

function [z,p,k] = buttap(n)
%BUTTAP Butterworth analog low pass filter prototype.
%   [z,p,k] = buttap(n) returns the zeros, poles, and gain
%   for the n-th order normalized prototype Butterworth analog
%   low pass filter. The resulting filter has n poles on the
%   unit circle in the left half plane, and no zeros.
%
%   See also BUTTER, CHEB1AP, and CHEB2AP.
z = [ ];
p = exp( sqrt(-1)*( pi*(1:2:2*n-1)/(2*n) + pi/2 ) ).';
k = real( prod(-p) );
As an example, see the complex plane layout of the poles of a 5th-order Butterworth system in Fig. 6.3.1.

For a desired attenuation a = 1/A at some chosen ω_a > ω_h we can calculate the required system order:

n ≥ log10( A² - 1 ) / ( 2 log10( ω_a/ω_h ) )    (6.3.10)

and round it to the first higher integer.
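Eq. 6.3.10 is easy to check numerically; a small Python sketch (the function name and the example attenuation values are ours, chosen for illustration):

```python
import math

def butterworth_order(A, wa):
    """Required Butterworth order n for an attenuation of 1/A at the
    normalized frequency wa = w_a/w_h (Eq. 6.3.10), rounded up."""
    return math.ceil(math.log10(A**2 - 1.0) / (2.0 * math.log10(wa)))

# Example: at least 40 dB attenuation (A = 100) one decade above cutoff
n40 = butterworth_order(100.0, 10.0)    # -> 2, since n = 2 gives 40 dB/decade
# Example: 100 dB attenuation (A = 1e5) one decade above cutoff
n100 = butterworth_order(1.0e5, 10.0)   # -> 5
```

The results agree with the asymptotic n·20 dB per decade slope discussed later in this chapter.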
Fig. 6.3.1: The complex plane layout of the poles s_1 ... s_5 of the 5th-order Butterworth system; all the poles lie on the unit circle in the left half of the s plane.
The Bessel-Thomson system approximates the ideal delay line response:

F(s) = e^(-s) = 1 / (cosh s + sinh s)    (6.3.11)

The ratio of the two hyperbolic functions can be developed into a continued fraction:

coth s = cosh s / sinh s = 1/s + 1/( 3/s + 1/( 5/s + 1/( 7/s + ... ) ) )    (6.3.12)

The Taylor series for the hyperbolic sine has odd powers of s and the hyperbolic cosine has even powers of s. When we divide these polynomials (using long division) the poles of the resulting polynomial meet the stability criterion. If we express this as a partial fraction expansion, truncated at the n-th fraction, an n-th order Bessel-Thomson system results. This can be expressed as:

F(s) = c_0 / B_n(s)    (6.3.13)

where B_n(s) is the Bessel polynomial:

B_n(s) = Σ_(k=0)^(n) c_k s^k    (6.3.14)

(6.3.15)

with the coefficients:

c_k = (2n - k)! / ( 2^(n-k) k! (n-k)! )   for k = 0, 1, 2, ..., n-1, n    (6.3.16)
The function which calculates the Bessel polynomial coefficients using Eq. 6.3.16 will be called BESTAP (this stands for BESsel-Thomson Analog Prototype, but the name is also in good agreement with the best time domain response of this system family). Within this function the system poles are extracted using the ROOTS function in Matlab. This works well up to n = 24; for higher orders the ratio of c_n to c_0 is so high that the computer numerical resolution (double precision, or 16 significant digits) is exceeded, but this is not a severe limitation, because in most circuit configurations the 1% component tolerances will limit system realizability to about n = 13 (assuming a 6-stage system, for which the highest reactive component value ratio is about 12:1). But if needed, we can always calculate the frequency response from the polynomial expression, using the coefficients c_k directly in Eq. 6.1.8, instead of using Eq. 6.1.10, as in the Matlab POLYVAL and FREQS routines.
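A minimal Python/NumPy sketch of the same coefficient calculation (Eq. 6.3.16) and pole extraction; the function name bessel_coeffs is ours, and numpy.roots plays the role of the Matlab ROOTS function:

```python
from math import factorial
import numpy as np

def bessel_coeffs(n):
    """Coefficients c_k of the Bessel polynomial B_n(s), Eq. 6.3.16,
    returned highest power first, as expected by numpy.roots."""
    c = [factorial(2 * n - k) // (2**(n - k) * factorial(k) * factorial(n - k))
         for k in range(n, -1, -1)]
    return np.array(c, dtype=float)

c3 = bessel_coeffs(3)        # B_3(s) = s^3 + 6 s^2 + 15 s + 15
poles = np.roots(c3)         # the 3rd-order Bessel-Thomson poles
```

All the extracted poles lie in the left half plane, as the stability argument above requires.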
Bessel-Thomson system poles are found in the left half of the complex plane on a family of ellipses, having the nearer focus at the complex plane origin and the other focus on the positive part of the real axis (see Fig. 6.3.2).

The poles calculated in this way define a family of systems with equal envelope delay (normalized to 1 s). This results in a progressively larger bandwidth and smaller rise time for each higher n (see Fig. 6.5.11). In addition, two other normalizations of the Bessel-Thomson system are possible.
One is to make the asymptote of the magnitude roll off slope the same as it is for the Butterworth system of equal order (this is useful for calculating transitional Bessel to Butterworth systems, as we have seen in Part 4, Sec. 4.5.3). If ω_a is to become the half power cut off frequency of the new system then, using the asymptotic approximation |B_n(jω_a)|² ≈ c_0² + ω_a^(2n):

|F(jω_a)| = c_0 / √( c_0² + ω_a^(2n) ) = 1/√2   ⟹   ω_a = c_0^(1/n)    (6.3.17)

In this case, with the roots of B_n(s) divided by c_0^(1/n), the envelope delay will be equal to c_0^(1/n) instead of 1, and the system bandwidth will be smaller for each higher n.
The other is to have equal bandwidth for any n, possibly normalized to 1 rad/s, as in the Butterworth family; in this way we would be able to compare different systems on a fair basis. Unfortunately, there is no simple way of matching the Bessel-Thomson system bandwidth to that of a Butterworth system of the same order. To achieve this we have to recursively multiply the poles by a correction factor proportional to the bandwidth ratio, until a satisfying approximation is reached (the values of poles modified in such a way for systems of order 2 to 10 are shown in Part 4, Table 4.5.1). The while loop at the end of the BESTAP routine has a tolerance of 0.0001, and was experimentally found to converge in only 8 to 12 loop iterations, depending on n; this tolerance is satisfactory for most practical purposes, but the reader can easily change it to suit his needs.
All three normalization options (group delay, asymptote, and bandwidth) are provided for by the BESTAP routine by entering, besides the system order n, an additional input argument in the form of a single character string:

'n' for the bandwidth normalized to 1 rad/s (the default);
't' for a group delay of 1 s;
'a' for the same attenuation slope asymptote as a Butterworth system of equal order.
As in the BUTTAP routine, three output variables are returned. But the number of arguments returned by BESTAP can be either 3, 2, or just 1. If all three output arguments are requested, the zeros are returned in z, the poles in p, and the non-normalized system gain is returned in the output variable k. Since there are no zeros in this family of systems, an empty matrix is returned in z.

With just two output arguments, only z and p are returned.

When only one output argument is specified, instead of having an empty matrix returned in z, which would not be very useful, we have decided to return the Bessel polynomial coefficients c_k. Note that for the n-th order system there are n + 1 coefficients, from c_n to c_0. The system gain normalization is achieved by dividing each coefficient by c_0, that is, the last one in the vector c, i.e., c=c/c(n+1). The coefficients are scaled as for the 't' option (equal envelope delay); other options are then ignored. But, if necessary, we can always calculate the polynomial coefficients for those cases from the poles, by invoking the POLY routine, i.e., c=poly(p).
function [z,p,k]=bestap(n,x)
%BESTAP  BESsel-Thomson Analog Prototype.
%   Returns the zeros z, poles p and gain k of the n-th order
%   Bessel-Thomson system. This is an all-pole system, so an
%   empty matrix is returned in z. The poles are calculated for
%   a maximally flat envelope (group) delay.
%
%   Call : [z,p,k]=bestap(n,x);
%   where :
%      n is the system order
%      x is a single-character string, making the poles:
%          'n' - normalized to a cutoff of 1 rad/s (default);
%          'a' - normalized to have the same attenuation
%                asymptote as a Butterworth system of same n;
%          't' - scaled for a group-delay of 1s.
%      k is the non-normalized system DC gain.
%      p are the poles (length-n column vector)
%      z are the zeros (no zeros, empty matrix returned)
%
%   With only one output argument :
%      c=bestap(n);
%   the n+1 coefficients of the system polynomial are returned,
%   scaled as in the 't' option, ignoring other options.
% Author : Erik Margan, Free of copyright !
if nargin == 1
   x='n';
end
z=[ ];                 % no zeros
if n == 1
   if nargout == 1
      c=[1, 1];        % first-order system coefficients
   else
      p=-1;            % first-order pole
      k=1;             % gain
      return           % end execution of this routine
   end
else
if nargout == 1
   z=c;
   return
end
c=c/c(n+1);
if x == 'a' | x == 'A'          % | means logical OR
   % Normalize to Butterworth asymptote
   g=c(1).^((n:-1:0)/n);        % c(1) is the coefficient at s^n
   c=c./g;                      % Normalize gain
end
p=roots(c);
if x == 'n' | x == 'N'
   % Bandwidth normalization to 1 rad/s results in
   % progressively greater envelope delay for increasing n
   P=p;                         % copy the poles to P
   y3=1/sqrt(2);                % Reference (-3 dB point)
   y=abs(freqw(P,1));           % attenuation at 1 rad/s (see FREQW)
   while abs( 1 - y3/y ) > 0.0001
      P=P*(y3/y);               % Make iterative corrections
      y=abs(freqw(P,1));
   end
   p=P;                         % copy P back to p
end
k=real(prod(-p));
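The same iterative bandwidth correction can be sketched in Python/NumPy; the helper polyval_pz below stands in for FREQW, and the 3rd-order example polynomial is our assumption, not the book's code:

```python
import numpy as np

def polyval_pz(p, w):
    """|F(jw)| of an all-pole system with unity DC gain (stands in for FREQW)."""
    s = 1j * w
    F = np.prod([(-pk) / (s - pk) for pk in p])
    return abs(F)

# 3rd-order Bessel-Thomson poles for unit group delay: roots of s^3 + 6s^2 + 15s + 15
p = np.roots([1.0, 6.0, 15.0, 15.0])

# Iteratively rescale the poles until |F(j1)| = 1/sqrt(2), as in the BESTAP while loop
y3 = 1.0 / np.sqrt(2.0)
y = polyval_pz(p, 1.0)
while abs(1.0 - y3 / y) > 0.0001:
    p = p * (y3 / y)
    y = polyval_pz(p, 1.0)
```

After the loop the response at 1 rad/s sits on the -3 dB level to within the stated tolerance, just as in the Matlab routine.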
Fig. 6.3.2: Complex plane pole map for unit group delay Bessel-Thomson systems of order 2-9 (with the first-order reference).
[m,n]=size(s);
P=ones(m,n);              % A matrix of all ones, same dimension as s
nr=max(size(R));          % number of elements in R
for k=1:nr
   if R(k) == 0
      P=P.*s;             % Multiply, but prevent from dividing by 0.
   else
      P=P.*(s-R(k))/(-R(k));
   end
end
function F=freqw(z,p,w)
% FREQW returns the complex frequency response F(jw) of the system
%       described by the zeros (vector z=[z1,z2,...,zm]) and the
%       poles (vector p=[p1,p2,...,pn]).
%
%       Call : F=freqw(z,p,w);
%       w is the frequency vector; can be real, imaginary or complex.
%       F=freqw(p,w) assumes a system with poles only.
%       FREQW uses PATS. See also FREQS and FREQZ.
% Author :
if nargin == 2            % nargin returns the number of input arguments
   w=p; p=z; z=[ ];       % assume a system with poles only
end
for k=1:max(size(p))
   if real(p(k)) >= 0
      disp('WARNING : This is not a Hurwitz-type system!')
   end
end
if ~any(imag(w))
   w=sqrt(-1)*w;          % if w is real, assume it to be imaginary
end
if isempty(z)
   F=1 ./pats( p, w ) ;
else
   F=pats( z, w )./pats( p, w ) ;
end
The magnitude of the complex frequency response is:

M(ω) = |F(jω)| = √( Re{F(jω)}² + Im{F(jω)}² )    (6.4.1)

Assuming a sinusoidal input signal, the magnitude represents the output to input ratio of the peak signal value at that particular frequency. In practice, when we talk about the system's frequency response, we usually mean the frequency dependent magnitude, M(ω). The magnitude contains no phase information.

We can calculate the magnitude by any of the following Matlab basic functions:

M=sqrt( (real(F)).^2 + (imag(F)).^2 );   % or :
M=sqrt( F .* conj(F) );                  % or :
M=sqrt( F .* ( F' ) );                   % or :
M=abs(F);                                % abs --> absolute value

We shall use the ABS command, not just because it is easy to type in, but because it executes much faster when there is a large amount of data to process.
In order to acquire a better understanding of what we are doing, let us write an example for a 5th-order Butterworth system. In the Matlab command window we write:

[z,p]=buttap(5);

If we now type:

z

Matlab will answer:

z =
     []

and if we type p, the answer is:

p =
  -0.3090 + 0.9511i
  -0.3090 - 0.9511i
  -0.8090 + 0.5878i
  -0.8090 - 0.5878i
  -1.0000 + 0.0000i

Since there are no zeros, an empty matrix (shown by square brackets) is returned in z. A 5-element column vector with complex conjugate pole values is returned in p. Let us plot these poles in the complex plane using Cartesian coordinates:

plot( real(p), imag(p), '*' ), axis([-2,2,-2,2]);   % see the result in Fig. 6.3.1

and the result would look as in Fig. 6.3.1 (for clarity, the distance from the origin and the unit circle are also shown there, both needing extra plot operations, not written in the example above). From now on we shall not write the ENTER character explicitly.
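For readers without the Signal Processing Toolbox, the same prototype can be computed directly from Eq. 6.3.7 and 6.3.8; here is a Python/NumPy sketch (the function is our illustration of the formulas, not the toolbox code):

```python
import numpy as np

def buttap(n):
    """Normalized Butterworth analog prototype, after Eq. 6.3.7 and 6.3.8:
    returns zeros, poles and gain, mirroring the Matlab routine's outputs."""
    k_idx = np.arange(1, n + 1)
    # Eq. 6.3.7: poles on the unit circle in the left half plane
    p = np.exp(1j * np.pi * (0.5 + (2 * k_idx - 1) / (2.0 * n)))
    z = np.array([])           # all-pole system: no zeros
    k = np.real(np.prod(-p))   # Eq. 6.3.8: DC gain normalization
    return z, p, k

z, p, k = buttap(5)
# p contains -0.3090 +/- 0.9511j, -0.8090 +/- 0.5878j and -1.0000; k = 1
```

The pole values match the Matlab printout above, and the gain k comes out as exactly 1, as it must for unit-circle poles.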
Fig. 6.4.1 has been created using the Matlab WATERFALL function and shows the 3D magnitude of the 5th-order Butterworth system over a limited s domain in the complex plane. The s domain here is the same as the left half of Fig. 6.3.1. Over the poles the magnitude would extend to infinity, so we have had to limit the height of the plot in order to show the low level features in more detail.

Fig. 6.4.1: The 5th-order Butterworth system magnitude, M(s) = |F(s)|, plotted over the same s-domain as the shaded left half of Fig. 6.3.1. The surface represents |F(s)|, but limited in height in order to reveal the low level details. Its shape above the jω axis is |F(jω)| = M(ω).
Now, we have intentionally limited the s domain to just the left half of the complex plane (where the real part is either zero or negative). This highlights the shape of the plot along the imaginary axis, which is, guess what, M(ω).

Looking at those lines parallel to the imaginary axis we can see what would happen if the poles were moved closer to that axis: the magnitude would exhibit a progressively pronounced peak. Such is the consequence of lowering the real part of the poles. Since the negative real part is associated with energy dissipative (resistive) components, it is clear that its role is to suppress resonance. But when we design an oscillator we need to compensate for any energy lost in the parasitic resistances of the reactive components by an active regeneration (a negative resistance, or a positive real part) in order to set the system poles (usually just one pair for oscillators) exactly on the imaginary axis.

What is interesting to note is the mirror-like symmetry about the real axis, owed to the complex conjugate nature of the Laplace space. Here we see at work the concept of negative frequency, which will be discussed later in Sec. 6.5, dealing with the Fourier transform inversion. This symmetry property will allow us to greatly improve the inverse transform algorithm efficiency.
It is also instructive to see the complex frequency response F(jω) in 3D:

w=(-3:0.01:3);            % 601 frequencies, -3 to 3, in 0.01 increment
F=freqw(z,p,w);           % 601 points of complex frequency response
plot3(w,real(F),imag(F))  % 3D plot of the Im and Re part of F(jw)
view(65,15);              % view angle, azimuth 65deg., elevation 15deg.
                          % see the result in Fig. 6.4.2
Fig. 6.4.2: The complex 3D plot of F(jω). The response phasor rotates clockwise, going from negative to positive frequency. The distance from the frequency axis is the magnitude. The circle on the real axis marks the DC response point, F(0) = 1. The Nyquist plot (see Fig. 6.4.3) usually shows only the ω ≥ 0 part, viewing in the -jω direction. The three projections are plotted to help those readers who do not have access to Matlab to visualize the shape.
Fig. 6.4.2, which has been created using the Matlab PLOT3 function, shows F(jω) with the phase angle twisting about the jω axis and the magnitude as the distance from the jω axis. The circle marker denotes the point where F(jω) crosses the real axis at zero frequency: the DC system gain, normalized to 1.

Whilst the Fig. 6.4.1 waterfall plot shape was relatively easy to interpret and feel, the 3D curve shape is somewhat less clear. In Matlab one can use the view(azimuth,elevation) command to see the graph from different viewing angles. In Fig. 6.4.2, view(65,15) was used. In more recent versions of Matlab the user can even select the viewing point by the mouse. To help the imagination of readers without access to Matlab we have also plotted the three shadows.

Regarding the symmetry, F(-jω) is not a mirror image of F(jω), unlike M(ω), because the phasor preserves its sense of rotation (clockwise, negative by definition for any system with poles on the left) throughout the jω axis. But if folded about the real axis the shape would match.
As a result of such symmetry, F(jω) can be plotted using only the ω ≥ 0 part of the axis, without any loss of information. The Nyquist plot [Ref. 6.9] shows both the magnitude and the phase angle on the same graph:

w=(0:0.01:3);
F=freqw(z,p,w);
axis('square');
plot(real(F),imag(F))
The result should look like Fig. 6.4.3. The view is as if we looked at Fig. 6.4.2 in the direction opposite to the jω axis (from +∞ towards the origin).

Fig. 6.4.3: The Nyquist plot of the 5th-order Butterworth system frequency response, showing the magnitude |F(jω)| and the phase angle φ(ω) = arctan( Im{F(jω)} / Re{F(jω)} ). The frequency axis is reduced to a single point projection at the origin, and the frequency increases parametrically with the phase angle, from the DC point on the real axis, through the half power bandwidth point at [-0.5, 0.5j] at ω = 1, to infinity at the origin.
The magnitude vs. frequency plot is then obtained by:

M=abs(F);
plot(w,M)

This should look like Fig. 6.4.4. The special point on the graph is the magnitude at the unit frequency: its value is 1/√2, or 0.707, and since power is proportional to the magnitude squared, this is the system's half power cut off frequency.
Fig. 6.4.4: The magnitude vs. frequency plot, M(ω), in a linear scale. The characteristic point is the half power bandwidth, where M = 0.707.
It has also become a standard practice to enhance the stop band detail by using either the log M vs. log ω or the semilog dB(M) vs. log ω plot scale (Fig. 6.4.5):

w=logspace(-1,1,301);     % 301 log-spaced frequencies, from 0.1 to 10
F=freqw(z,p,w);
M=abs(F);
semilogx(w,20*log10(M))
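The same dB magnitude can be cross-checked analytically, since for the normalized all-pole Butterworth system |F(jω)| = 1/√(1 + ω^(2n)); a short Python sketch:

```python
import numpy as np

# dB magnitude of the 5th-order Butterworth response vs. log frequency,
# computed from Eq. 6.3.1 rather than from the pole positions
n = 5
w = np.logspace(-1, 1, 301)
M_dB = 20 * np.log10(1.0 / np.sqrt(1.0 + w**(2 * n)))
# M_dB[150] is the value at w = 1: the -3.01 dB half power level
```

At ω = 10 (one decade above cutoff) the level is near -100 dB, confirming the n·20 dB per decade asymptote for n = 5.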
By using the log-log scale, or a linear dB vs. log frequency scale, we can quickly estimate the system order, since the slope is simply (for all pole systems) n times the first-order system slope (n·20 dB per frequency decade).
Fig. 6.4.5: Bode plot of the 5th-order Butterworth system magnitude, as in Fig. 6.4.4, but in a linear dB vs. log frequency scale. In such a scale, all-pole systems have an asymptotically linear attenuation slope, proportional to the system order (a factor of 10^n, i.e. n·20 dB, per decade of ω). The marked -3 dB reference point is the same half power cut off frequency point as in Fig. 6.4.4.
The phase angle is calculated from the ratio of the imaginary and the real part of the complex response:

φ(ω) = arctan( Im{F(jω)} / Re{F(jω)} )    (6.4.2)

Note that the Matlab arctangent function is called atan. However, Matlab also has a built in command named ANGLE, using the same Eq. 6.4.2, so:

phi=angle(F);
semilogx(w,phi);
Fig. 6.4.6: The phase angle φ(ω) vs. frequency plot of the 5th-order Butterworth system. The circularity of trigonometric functions, defined within the range ±π radians, is the cause of the discontinuous phase vs. frequency relationship.
function q=ephd(phi)
% EPHD  Eliminate PHase Discontinuities.
%       Outperforms UNWRAP and ADDTWOPI for systems with zeros.
%       Use :
%          q=ephd(phi);
%       where :
%          phi --> input phase vector in radians (range: -pi <= phi <= pi);
%          q   --> output phase vector, "unwrapped";
%       If phi is a matrix, unwrapping is performed down each column.
% Author :
[r,c]=size(phi);
if min(r,c) == 1
   phi=phi(:);          % column-wise orientation
   c=1;
end
q=diff(phi);            % differentiate to detect discontinuities
% compensate for one element lost in diff and round the steps:
q=[zeros(1:c); pi*round(q/pi)];
q=cumsum(q);            % integrate back by cumulatively summing
q=phi-q;                % subtract the correcting values
if r == 1
   q=q.';               % restore orientation
end
The trick used in the EPHD routine is to first differentiate the phase in order to find where the discontinuities are and determine how large they are, then normalize them by dividing by π, round this to integers, multiply back by π, integrate back to obtain the corrections, and subtract the corrections from the original phase vector.
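The same differentiate / round / re-integrate trick carries over directly to Python/NumPy (the routine below is our re-implementation of the idea, tried out on a wrapped phase ramp as a synthetic test signal):

```python
import numpy as np

def ephd(phi):
    """Eliminate phase discontinuities: differentiate, round each step to a
    multiple of pi, re-integrate and subtract, as in the EPHD routine."""
    d = np.diff(phi)                                  # detect the jumps
    steps = np.concatenate(([0.0], np.pi * np.round(d / np.pi)))
    return phi - np.cumsum(steps)                     # remove accumulated jumps

# A steadily falling phase, wrapped into (-pi, pi], should unwrap to a line
true_phase = -np.linspace(0.0, 4.0 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))
unwrapped = ephd(wrapped)
```

After unwrapping, the result follows the original ramp and no longer contains any 2π steps.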
Following our 5th-order Butterworth example, we can now write:

alpha=ephd(phi);          % unwrapped phase;
semilogx(w,180*alpha/pi)  % show alpha in degrees vs. log-scaled w;

Fig. 6.4.7: Bode plot of the unwrapped phase, in a linear degree vs. log frequency scale.
Plotting the phase in linear degrees vs. log scaled frequency reveals an interesting fact: the system exhibits a -90° phase shift for each pole, or -450° total phase shift for the 5th-order system. Also, the phase shift at the cut off frequency ω_h is exactly one half of the total phase shift (reached as ω → ∞).

Another important fact is that stable systems (those with poles in the left half of the complex plane) will always exhibit a negative phase shift, whatever the system configuration (low pass or high pass, inverting or non-inverting). If you ever see a phase graph with a positive slope, first inspect the system gain in that frequency region. If it is 0.1 or higher, that is a cause for major concern (that is, if your intention was not to build an oscillator!).
The envelope (group) delay is the derivative of the phase with respect to frequency:

τ_e = dφ/dω    (6.4.3)

(note: φ must be in radians!). Now it becomes evident why we have had to unwrap the circular phase function: each 2π discontinuity would, when differentiated, produce a very high, sharp spike in the envelope delay.
Numerical differentiation can be performed by simply taking the difference of
each pair of adjacent elements for both the phase and the frequency vector:
dphi=phi(2:1:300)-phi(1:1:299);
dw=w(2:1:300)-w(1:1:299);
But Matlab has a built in command called DIFF, so let us use it:
tau=diff(phi)./(diff(w));
Note that the values in variable tau are negative, reflecting the fact that the
system output is delayed in time. Since we call this response a delay by definition, we
could use the absolute value. However, we prefer to keep the negative sign, because it
also reflects the sense of the phase rotation (see Fig. 6.4.3 and 6.4.7). An upward
rotating phase (or a counter-clockwise rotation in the Bode plot of the complex
frequency response) would imply a positive time delay or output before input and,
consequently, an unstable or oscillatory system.
Fig. 6.4.8: The envelope (group) delay τ_e vs. frequency of the 5th-order Butterworth system. The delay is largest for frequencies where the phase has the greatest slope.
So far we have derived the phase and group delay functions from the complex response to the imaginary frequency. There are times, however, when we would like to save either processing time or memory (as in embedded instrumentation applications). It is then advantageous to calculate the phase or the group delay directly from the system poles (and zeros, if any) and the frequency vector.

The phase influence of a single pole p_k can be calculated as:

φ_k(ω, p_k) = arctan( (ω - Im{p_k}) / Re{p_k} )    (6.4.4)

The influence of a zero is calculated in the same way, but with a negative sign. The total system phase shift is equal to the sum of all the particular phase shifts of poles and zeros:

φ(ω) = Σ_(k=1)^(n) φ_k(ω, p_k) - Σ_(i=1)^(m) φ_i(ω, z_i)    (6.4.5)
But owing to the inherent complex conjugate symmetry of poles and zeros, only half of them need to be calculated and the result is then doubled. If the system order is odd, the real pole is summed just once; the same is true for any real zero. This, of course, requires some sorting procedure for the system poles and zeros, but sorting is performed much more quickly than multiplication with ω, which is usually a lengthy vector. If we are interested in getting data for a single frequency, or just two or three characteristic points, then it might be faster to skip sorting and calculate with all poles and zeros. In Matlab, poles and zeros are already returned sorted. See the PHASE routine, in which Eq. 6.4.4 and 6.4.5 are implemented.
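A compact Python/NumPy sketch of Eq. 6.4.4 and 6.4.5, without the conjugate-symmetry speed-up (the function name is ours):

```python
import numpy as np

def phase_from_pz(z, p, w):
    """Phase angle from poles and zeros directly (Eq. 6.4.4 and 6.4.5).
    arctan is continuous in w, so the result needs no unwrapping."""
    w = np.asarray(w, dtype=float)
    phi = np.zeros_like(w)
    for pk in p:
        phi += np.arctan((w - pk.imag) / pk.real)
    for zi in z:
        phi -= np.arctan((w - zi.imag) / zi.real)
    return phi

# Single real pole at -1: the phase runs from 0 towards -pi/2,
# passing through -pi/4 at w = 1
phi = phase_from_pz([], [complex(-1.0, 0.0)], [0.0, 1.0, 1e6])
```

Because each arctangent term is a smooth function of ω, the summed phase is obtained unwrapped, exactly the property the text attributes to the PHASE routine.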
Note also that with the PHASE routine we obtain the unwrapped phase directly, so we do not have to resort to the EPHD routine.
function phi=phase(z,p,w)
% PHASE returns the phase angle of the system specified by the zeros
%       z and poles p for the frequencies in vector w :
%
%       Call : phi=phase(z,p,w);
%
%       Instead of using angle(freqw(z,p,w)), which returns the phase
%       in the range +/-pi, this routine returns the "unwrapped" result.
%
%       See also FREQW, ANGLE, EPHD and GDLY.
% Author: Erik Margan, 890327, Last rev.: 980925, Free of copyright!
if nargin == 2
   w = p ;
   p = z ;
   z = [] ;         % A system with poles only.
end
if any( real( p ) > 0 )
disp('WARNING : This is not a Hurwitz-type system !' )
end
n = max( size( p ) ) ;
m = max( size( z ) ) ;
% find w orientation to return the result in the same form.
[ r, c ] = size( w ) ;
if c == 1
   w = w(:).' ;     % make it a row vector.
end
% calculate phase angle for each pole and zero and sum it columnwise.
phi(1,:) = atan( ( w - imag( p(1) ) ) / real( p(1) ) ) ;
for k = 2 : n
phi(2,:) = atan( ( w - imag( p(k) ) ) / real( p(k) ) ) ;
phi(1,:) = sum( phi ) ;
end
if m > 0
for k = 1 : m
phi(2,:) = atan( ( imag( z(k) ) - w ) / real( z(k) ) ) ;
phi(1,:) = sum( phi ) ;
end
end
phi( 2, : ) = [] ;  % result is in phi(1,:)
if c == 1
   phi = phi(:) ;   % restore the same form as w.
end
A similar procedure can be applied to the group delay. The influence of a single
pole s_k is calculated as:

    \tau_k(\omega, s_k) = \frac{\mathrm{d}\varphi_k}{\mathrm{d}\omega} = \frac{\Re\{s_k\}}{\Re\{s_k\}^2 + (\omega - \Im\{s_k\})^2}        (6.4.6)

As for the phase, the total system group delay is the sum of all the delays for each
pole and zero:

    \tau(\omega) = \sum_{k=1}^{n} \tau_k(\omega, s_k) - \sum_{i=1}^{m} \tau_i(\omega, z_i)        (6.4.7)
Again, owing to the complex conjugate symmetry, only half of the complex
poles and zeros need to be taken into account and the result doubled; the delay of
any real pole or zero is then added to it. The GDLY (Group DeLaY) routine
implements Eq. 6.4.6 and 6.4.7.
function tau=gdly(z,p,w)
% GDLY returns the group (envelope) time delay for a system defined
%      by zeros z and poles p, at the chosen frequencies w.
%
%      Call : tau=gdly(z,p,w);
%
%      Although the group delay is defined as a positive time lag,
%      by which the system response lags the input, this routine
%      returns a negative value, since this reflects the sense of
%      phase rotation with frequency.
%
%      See also FREQW, PATS, ABS, ANGLE, PHASE.
% Author: Erik Margan, 890414, Last rev.: 980925, Free of copyright!
if nargin == 2
w=p;
p=z;
z=[]; % system has poles only.
end
if any( real( p ) > 0 )
disp( 'WARNING : This is not a Hurwitz type system !' )
end
n=max(size(p));
m=max(size(z));
[r,c]=size(w);
if c == 1
w=w(:).' ; % make it a row vector.
end
tau(1,:) = real(p(1)) ./(real(p(1))^2 + (w-imag(p(1))).^2);
for k = 2 : n
tau(2,:) = real(p(k)) ./(real(p(k))^2 + (w-imag(p(k))).^2);
tau(1,:) = sum( tau ) ;
end
if m > 0
for k = 1 : m
tau(2,:)=-real(z(k)) ./(real(z(k))^2 + (w-imag(z(k))).^2);
tau(1,:) = sum( tau ) ;
end
end
tau(2,:) = [] ;
if c == 1
tau = tau(:) ;
end
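Eq. 6.4.6 can be checked numerically against a finite-difference derivative of the single-pole phase contribution, Eq. 6.4.4. The following NumPy sketch is ours (illustrative, not part of the original toolset) and uses one pole of a conjugate pair:

```python
import numpy as np

p = -0.5 + 1j*np.sqrt(3)/2      # one pole of a conjugate pair
w = np.linspace(0.0, 3.0, 3001)
dw = w[1] - w[0]

# Eq. 6.4.6: the delay contribution of a single pole s_k:
#   tau_k(w) = Re{s_k} / (Re{s_k}^2 + (w - Im{s_k})^2)
tau = p.real / (p.real**2 + (w - p.imag)**2)

# Check: the same delay is the w-derivative of the phase contribution,
# Eq. 6.4.4; it is negative for a Hurwitz pole, which is exactly the
# sign convention the GDLY routine keeps.
phi = np.arctan((w - p.imag) / p.real)
tau_num = np.gradient(phi, dw)
assert np.max(np.abs((tau - tau_num)[1:-1])) < 1e-3
```

The interior points agree to within the truncation error of the central difference; the analytic form, of course, has no such error.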
Fig. 6.5.1: The impulse and step response of the 5th-order Butterworth system. The
impulse amplitude has been normalized to represent the response to an ideal,
infinitely narrow, infinite amplitude input impulse. The impulse response reaches the
peak value at the time equal to the envelope delay value at DC; this delay is also the
half amplitude delay of the step response. The step response first crosses the final
value at the time equal to the envelope delay maximum. Also the step response peak
value is reached when the impulse response crosses the zero level for the first time.
If the impulse response is normalized to have the area (the sum of all samples) equal
to the system DC gain, the step response would be simply a time integral of it.
    s = \sigma + j\omega        (6.5.1)

Since the complex plane variable s is composed of two independent parts (real
and imaginary), F(s) may be treated as a function of two variables, σ and ω. This
can be most easily understood by looking at Fig. 6.4.1, in which the complex frequency
response (magnitude) of a 5-pole Butterworth function is plotted as a 3D function over
the Laplace plane.
In that particular case we had:

    F(s) = \frac{s_1 s_2 s_3 s_4 s_5}{(s - s_1)(s - s_2)(s - s_3)(s - s_4)(s - s_5)}        (6.5.2)

where s_1 ... s_5 have the same values as in the example at the beginning of Sec. 6.4.1.
When the value of s in Eq. 6.5.2 comes close to the value of one of the poles,
s_i, the magnitude |F(s)| increases, becoming infinitely large for s = s_i.
Let us now restrict s to the imaginary axis by setting:

    \sigma = 0, \quad \text{or:} \quad s = j\omega        (6.5.3)

This has the effect of slicing the F(s) surface along the imaginary axis, as we
did in Fig. 6.4.1, revealing the curve on the surface along the cut, which is |F(jω)|, or
in words: the magnitude M(ω) of the complex frequency response. As we have
indicated in Fig. 6.4.5, we usually show it in a log-log scaled plot. However, for transient
response calculation a linear frequency scale is appropriate (as in Fig. 6.4.2), since we
need the result of the inverse transform in linear time scale increments.
Now that we have established the connection between the Laplace transformed
transfer function and its frequency response, we have another point to consider:
conventionally, the Fourier transform is used to calculate waveform spectra, so we need
to establish the relationship between a frequency response and a spectrum. We must
also explore the effect of taking discrete values (sampling) of the time domain and
frequency domain functions, and see to what extent we approximate our results by
taking finite length vectors of finite density sampled data. Those readers who would
like to embed the inverse transform in a microprocessor-controlled instrument will have
to pay attention to amplitude quantization (finite word length) as well, but in Matlab
this is not an issue.
We have examined the Dirac function δ(t) and its spectrum in Part 1, Sec. 1.6.6.
Note that the spectral components are separated by Δω = 2π/T, where T is the
impulse repetition period. If we allow T → ∞ then Δω → 0. Under these conditions we
can hardly speak of discrete spectral components, because the spectrum has become
very dense; we rather speak of spectral density. Also, instead of the magnitudes of
individual components we speak of the spectral envelope, which for δ(t) is essentially
flat.

However, if we do not have an infinitely dense spectrum, then Δω is small but
not 0, and this merely means that the impulse repeats after a finite period T = 2π/Δω
(this is the mathematical equivalent of testing a system by an impulse of duration
much shorter than the smallest system time constant and of repetition period much
larger than the largest system time constant).
Now let us take such an impulse and present it to a system having a selective
frequency response. Fig. 6.5.2 shows the results both in the time domain and the
frequency domain (magnitude). The time domain response is obviously the system
impulse response, and its equivalent in the frequency domain is a spectrum, whose
density is equal to the input spectral density, but with the spectral envelope shaped by
the system frequency response. The conclusion is that we only have to sample the
frequency response at some finite number of frequencies and perform a discrete Fourier
transform inversion to obtain the impulse response.
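As an illustration of this conclusion, the following NumPy sketch (ours, not the book's Matlab) samples the frequency response of a simple 1st-order system and inverts it with a discrete Fourier transform. Note that NumPy's inverse FFT uses the e^{+jωt} kernel, so, unlike the Matlab code later in this section, no conjugation is needed:

```python
import numpy as np

# 1st-order low pass: H(jw) = 1/(1 + jw), with h(t) = exp(-t).
N  = 256                      # positive-frequency samples
dw = 0.25                     # frequency spacing
w  = np.arange(N + 1)*dw      # single-sided frequency grid, 0 ... 64
H  = 1.0/(1.0 + 1j*w)         # sampled frequency response

# Inverse DFT of the Hermitian-extended sampled spectrum; irfft already
# assumes the conjugate-symmetric negative-frequency half.
dt = 2*np.pi/(2*N*dw)         # implied time step
h  = np.fft.irfft(H, 2*N)/dt  # approximate impulse response
t  = np.arange(2*N)*dt

# The samples sum exactly to the DC gain: sum(h)*dt == H(0).
assert abs(np.sum(h)*dt - 1.0) < 1e-9
# Away from the t = 0 discontinuity the match to exp(-t) is close;
# truncating the spectrum at w = 64 leaves some ringing (see Sec. 6.5.2).
assert np.max(np.abs(h[10:200] - np.exp(-t[10:200]))) < 0.05
```

The residual ringing near t = 0 is precisely the error that the windowing of Sec. 6.5.2 is designed to reduce.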
[Figure: four panels. Top: the input impulse (128 samples) and the resulting impulse
response, vs. time / sampling interval. Bottom: the input spectrum and the response
spectrum (64 frequency components), vs. frequency x sampling interval.]
Fig. 6.5.2: Time domain and frequency domain representation of a 5-pole Butterworth system
impulse response. The spectral envelope (only the magnitude is shown here) of the output is
shaped by the system frequency response, whilst the spectral density remains unchanged. From
this fact we conclude that the time domain response can be found from a system frequency
response using the inverse Fourier transform. The horizontal scale is the number of samples (128
in the time domain and 64 in the frequency domain; see the text for the explanation).
If we know the magnitude and phase response of a system at some finite number
of equally spaced frequency points, then each point represents:

    F_i = M_i \cos(\omega_i t + \varphi_i)        (6.5.4)

and the response at a chosen time point t_k is the sum of the contributions of all the
frequency components:

    f(t_k) = \sum_{i=\min}^{\max} M_i \cos(\omega_i t_k + \varphi_i)        (6.5.5)
Eq. 6.5.5 is the discrete Fourier transform, with the exponential part
expressed in trigonometric form. However, if we were to plot the response calculated
after Eq. 6.5.5, we would see that the time axis is reversed, and from the theory of
Fourier transform properties (symmetry property, [Ref. 6.14, 6.15, 6.18]) we know that
the application of two successive Fourier transforms returns the original function, but
with the sign of the independent variable reversed:

    \mathcal{F}\{\mathcal{F}\{f(t)\}\} = \mathcal{F}\{F(j\omega)\} = f(-t)        (6.5.6)
or, more generally, with the forward transform \mathcal{F} advancing the chain and the
inverse transform \mathcal{F}^{-1} running it backwards:

    f(t) \;\rightleftarrows\; F(j\omega) \;\rightleftarrows\; f(-t) \;\rightleftarrows\; F(-j\omega) \;\rightleftarrows\; f(t)        (6.5.7)
The main drawback in using Eq. 6.5.5 is the high total number of operations,
because there are three input data vectors of equal length (ω, M and φ) and each
contributes to every time point of the result. It seems that greater efficiency might be
obtained by using the input frequency response data in complex form, with the
frequency vector represented by the index of the F(jω) vector.

Now, F(jω) in its complex form is a two-sided spectrum, as was shown in
Fig. 6.4.3, whereas we are often faced with only a single-sided spectrum. It can be shown
that a real valued f(t) will always have F(jω) symmetrical about the real axis σ. Thus:

    F(-j\omega) = F^{*}(j\omega)        (6.5.8)
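This Hermitian symmetry is easy to verify numerically: the DFT of any real-valued sequence exhibits it exactly. A small NumPy check (ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)     # any real-valued time sequence
X = np.fft.fft(x)

# Hermitian symmetry, Eq. 6.5.8: the component at -w_k (index N-k) is
# the complex conjugate of the component at +w_k (index k).
N = len(x)
k = np.arange(1, N)
assert np.allclose(X[N - k], np.conj(X[k]))
```

It is this redundancy that lets us work with the single-sided spectrum only and recover the other half implicitly.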
Fig. 6.5.3: As in Part 1, Fig. 1.1.1, but from a slightly different perspective: a) the real signal
instantaneous amplitude a(t) = A sin φ, where φ = ωt; b) the real part of the instantaneous signal
phasor, Re{a} = A sin φ, can be decomposed into two half-amplitude, oppositely rotating,
complex conjugate phasors, (A/2j) e^{jφ} - (A/2j) e^{-jφ}. The second term has rotated by
-φ = -ωt and, since t is obviously positive (see the a) graph), the negative sign is attributed to ω;
thus, clockwise rotation is interpreted as a negative frequency.
(6.5.9)

(6.5.10)

hence, using Eq. 6.5.9 and 6.5.10 and taking into account the cancellation of the
imaginary parts, we obtain:

    f(t) = f_{+}(t) + f_{+}^{*}(t) = 2\,\Re\{f_{+}(t)\}        (6.5.11)

    f(t) = 2\,\Re\{ ( \mathcal{F}\{F_{+}^{*}(j\omega)\} )^{*} \}        (6.5.12)

Note that the second (outer) complex conjugate is here only to satisfy
mathematical consistency; in the actual algorithm it can be safely omitted, since only
the real part is required.
As the operator \mathcal{F}\{\,\} in Eq. 6.5.12 implies integration, we must use the discrete
Fourier transform (DFT) for computation. The DFT can be defined by decomposing
the Fourier transform integral into a finite sum of N elements:

    F(k) = \frac{1}{N} \sum_{i=0}^{N-1} f(i)\, e^{\,j 2\pi i k / N}        (6.5.13)
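A direct evaluation of this sum takes O(N^2) operations, which is why an FFT is used in practice. The following NumPy sketch is ours; note that with the +j kernel and the 1/N factor, the sum coincides with NumPy's inverse FFT (the -j convention would instead correspond to fft divided by N):

```python
import numpy as np

def dft_sum(f):
    # Direct evaluation of Eq. 6.5.13, O(N^2) operations:
    #   F(k) = (1/N) * sum_i f(i) * exp(+j*2*pi*i*k/N)
    N = len(f)
    i = np.arange(N)
    return np.array([np.sum(f * np.exp(2j*np.pi*i*k/N)) for k in range(N)]) / N

# A test sequence: a pure cosine with 3 cycles in the record.
f = np.cos(2*np.pi*3*np.arange(16)/16)
F = dft_sum(f)

# An FFT evaluates the same sum in O(N log N) operations.
assert np.allclose(F, np.fft.ifft(f))
```

For the cosine above, the energy lands in the two conjugate bins k = 3 and k = 13, each with amplitude 0.5, as expected from Eq. 6.5.8.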
6.5.2 Windowing
For calculating the transient response a further reduction in the number of
operations is possible through the use of windowing. Windowing means multiplying
the system response by a suitable window function. We shall use windowing in the
frequency domain; the reason for this we have already discussed when considering
to what extent the DFT is an approximation. Like many others before us, in
particular the authors of the various window functions, we, too, have found that the
accuracy improves if the influence of the higher frequencies is reduced.
Since the frequency response of a high order system (third-order or greater) falls
off quickly above the cut off frequency, we can take just N = 256 frequency samples,
and after the inverse FFT we still obtain a time domain response with an accuracy equal to
or better than the vertical resolution of the VGA type of graphics (1/400, or 0.25%). And
as the sample density (number of points) of the transient wavefront increases with the
number of stop band frequency samples, it is clear that the smaller the contribution of
higher frequencies, the greater is the accuracy.
But, in order to achieve a comparable accuracy with 1st- and 2nd-order systems,
we would have to use N_1 = 4096 and N_2 = 1024 frequency samples, respectively.
Thus, since we would like to minimize the length of the frequency vector, low order
systems need to be artificially rolled off at the high end. This can be done by
multiplying the frequency response by a suitable window function, element by element,
as shown in Fig. 6.5.4. The window function used in Fig. 6.5.4 (and also in the TRESP
routine) is a real valued Hamming type of window (note that we need only its right-hand
half, since we use a single-sided spectrum; the other half is implicitly used owing
to Eq. 6.5.12).
W=0.54-0.46*cos(2*pi*(N+1:1:2*N)/(2*N)); % right half Hamming window
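For readers following along in another environment, an equivalent NumPy construction of the same right-half Hamming window (our illustrative sketch, with a couple of sanity checks) is:

```python
import numpy as np

N = 256
n = np.arange(N + 1, 2*N + 1)    # indices N+1 ... 2N, as in the Matlab line
W = 0.54 - 0.46*np.cos(2*np.pi*n/(2*N))

# The right half of a Hamming window: it starts just below 1 and rolls
# off monotonically to the Hamming end value 0.54 - 0.46 = 0.08.
assert 0.99 < W[0] < 1.0
assert abs(W[-1] - 0.08) < 1e-12
assert np.all(np.diff(W) < 0)
```

Since the window is real and even about zero frequency, multiplying the single-sided spectrum by it scales real and imaginary parts alike and leaves the phase untouched.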
If the system order and type could be precisely identified, this error might be
corrected by forcing the first point of the 1st -order impulse response to a value equal to
the sampling time period, multiplied by the system DC gain, as has been done in the
TRESP routine.
In Sec. 6.5.6 we give a more detailed error analysis for the 1st-, 2nd- and 3rd-order systems, for both the windowed and the non-windowed spectrum.
[Figure: normalized amplitude vs. frequency w = (0:1:255)/8, showing the Hamming
window, abs( F(jw) ), and the windowed product abs( W .* F(jw) ).]
Fig. 6.5.4: Windowing example. The 1st-order frequency response (only the
magnitude is shown on the plot) is multiplied element by element by the Hamming
type of window function in order to reduce the influence of high frequencies
and improve the impulse response calculation accuracy. Note that the window
function is real only, affecting the system real and imaginary part equally; thus
the phase information is preserved and only the magnitude is corrected.
    \sum_{k=0}^{N-1} f(t_k) = F(0)        (6.5.14)

By default, the TRESP routine (see below) returns the impulse amplitude in the
same way, representing a unity gain system's response. Optionally, we can denormalize
it as if the response were caused by an ideal, infinitely high impulse; then the 1st-order
response starts the exponential decay from a value very close to one, as it should. If the
system's half power bandwidth, ω_h, is found at the (m+1)-th element of the frequency
response vector, the amplitude denormalization factor will be:

    A = \frac{N}{2\pi m}        (6.5.15)
The 2π factor comes as a bit of a surprise here. See Sec. 6.5.5 about time scale
normalization for an explanation.

The term m can be entered explicitly, as a parameter. But it can also be derived
from the frequency vector by finding the index at which it is equal to 1, or it can be
found by examining the magnitude and finding the index of the point nearest to the half
power bandwidth value (in both cases the index must be decremented by 1).
Another problem can be encountered with high order systems which exhibit a
high degree of ringing, e.g., Chebyshev systems of order 8 or greater. If m < 8, some
additional ringing is introduced into the time domain response. This ringing results
from the time frame implicitly repeating with a period T = 2π/Δω, where Δω
describes the finite spectral density of the input data. If we have specified the system cut off
frequency too near to the origin of the frequency vector, this causes a time scale
expansion. Thus overlapping of adjacent responses will introduce distortion if the
impulse response has not decayed to zero by the end of the period T. Therefore the
choice of placing the cut off frequency relative to the frequency vector is a compromise
between the pass band and stop band description. In Matlab, the frequency vector of N
linearly spaced frequencies, normalized to 1 at its (m+1)-th element, can be written as:
N=256;
m=8;
w=(0:1:N-1)/m;
The variable m specifies the normalized frequency unit. The transient response
of both Butterworth and Bessel systems can be calculated with good accuracy by using
a frequency vector normalized to 1 at its 5th sample (m=4). But by placing the cutoff
frequency at the 9th sample (m=8) of a frequency vector of length 256, an acceptably
low error will be achieved even for a 10th-order Chebyshev system. For higher-order,
strongly ringing systems one will probably need to increase the frequency vector length to 512 or
1024 elements in order to prevent time window overlapping.
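With these values the implied numbers are worth writing out: dt = 2πm/N = π/16 ≈ 0.196 time units, and the total time window is T = N·dt = 2πm ≈ 50.3 time units, within which the impulse response must decay. A small NumPy sketch of this bookkeeping (ours):

```python
import numpy as np

N, m = 256, 8            # vector length; samples per normalized frequency unit
w = np.arange(N) / m     # the frequency vector: w[m] == 1 (the (m+1)-th sample)
dw = w[1] - w[0]         # spectral line spacing, 1/m
dt = 2*np.pi*m/N         # sampling time interval, dt = 1/A (cf. Eq. 6.5.15)
T = N*dt                 # total time window

assert w[m] == 1.0
# The time window equals the implicit repetition period T = 2*pi/dw:
assert abs(T - 2*np.pi/dw) < 1e-12
```

Doubling N at fixed m doubles the time window (halving the overlap distortion), at the price of twice the computation.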
Y(1)=X(1);
for k=2:max(size(X))
   Y(k)=Y(k-1)+X(k);
end
Now, normalizing the time scale means showing it in increments of the system
time constant (τ_C, 2τ_C, 3τ_C, ...). Thus we simply set τ_C = 1. For the starting
sample at t = 0 the response is f(0) = 1, so in order to obtain the response of a unity
gain system excited by a finite amplitude impulse we must denormalize the amplitude
(see Eq. 6.5.15) by 1/A:

    f_{\mathrm{r}}(t) = \frac{1}{A}\, e^{-t}        (6.5.18)
z=[];                 % no zeros,
p=-1;                 % just a single real pole
N=256;                % total number of samples
m=8;                  % samples in the frequency unit
w=(0:1:N-1)/m;        % the frequency vector
dt=2*pi*m/N;          % sampling time interval = 1/A
t=dt*(0:1:N-1);       % the time vector
F=freqw(z,p,w);       % the frequency response
In=(2*real(fft(conj(F)))-1)/N;  % the impulse response
Ir=dt*exp(-t);        % 1st-order ref., denormalized
plot( t, Ir, t, In )
title('Ideal vs. windowed response'), xlabel('Time')
plot( t(1:30), Ir(1:30), t(1:30), In(1:30) )
title('Zoom first 30 samples')
In the above example (see the plot in Fig. 6.5.5), we see that the final values of
normalized impulse response In are not approaching zero, and by zooming on the first
30 points we can also see that the first point is too low and the rest somewhat lower
than the reference. Windowing can correct this:
W=0.54-0.46*cos(2*pi*(N+1:2*N)/(2*N));  % right half Hamming window
Iw=(2*real(fft(conj(F.*W)))-1)/N;       % impulse, windowed fr.resp.
plot( t, Ir, t, Iw ), xlabel('Time')
title('Ideal vs. windowed response')
plot( t(1:30), Ir(1:30), t(1:30), Iw(1:30) )
title('Zoom first 30 samples')
This plot fits the reference much better. But the first point is still far too low.
From the amplitude denormalization factor, by which the reference was multiplied, we
know that the correct value of the first point should be 1/A = dt. So we may force the
first point to this value; but, by doing so, we would alter the sum of all values by
dt-Iw(1). In order to obtain the correct final value of the step response, the impulse
response then requires a correction of all points by 1/(1+(dt-Iw(1))/N), as in the
following example:
% the following correction is valid for 1st-order systems only !!!
er1=dt-Iw(1);         % the first point error
Iw(1)=dt;             % correct first-point amplitude
% note that with this we have altered the sum of all values by er1,
% so we should modify all the values by :
Iw=Iw*(1/(1+er1/N));
Ir=Ir*(1/(1+er1/N));
% the same could also be achieved by : Ix=Ix/sum(Ix);
plot( t(1:30), Ir(1:30), t(1:30), Iw(1:30) ), title('Zoom first 30')
plot( t, (Iw-Ir) ), title('Impulse response error plot')
Likewise we can compare the calculated step response. Our reference is then:

    f_{\mathrm{r}}(t) = 1 - e^{-t}        (6.5.19)

But if the first-order impulse response is numerically integrated, the value of the
first sample of the step response will be equal to the value of the first sample of the
impulse response, instead of zero as it should be in the case of a low pass LTIC system.
Also, there is an additional problem resulting from numerical integration, which
manifests itself as a one half sample time delay. Remember what we observed
when we derived the envelope delay from the phase: numerical differentiation
assigned each result point to each difference pair of the original data, so that the
resulting vector was effectively shifted left in (log-scaled) frequency by the (geometric)
mean of two adjacent frequency points, √(ω_n ω_{n+1}). Because we work with a linear
scale here, the shift is the arithmetic mean, Δω/2. Since numerical integration is the
inverse process of differentiation, the signal is shifted right. However, whilst the
differentiation vector had one sample less, numerical integration returns the same
number of samples, not one more.
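The effect of this half-sample compensation can be demonstrated with a short NumPy sketch (ours, mirroring the Matlab steps of this section with an analytically known 1st-order response):

```python
import numpy as np

dt = 0.05
t = np.arange(200)*dt
I = dt*np.exp(-t)              # sampled 1st-order impulse response
S_ref = 1.0 - np.exp(-t)       # analytic step response, Eq. 6.5.19

# Plain cumulative sum: the result lags the true integral by half a sample.
S_raw = np.cumsum(I)

# The compensation: prepend a zero, cumulate the N+1 values, average
# adjacent samples, then force the first value to zero.
S = np.cumsum(np.concatenate(([0.0], I)))
S = 0.5*(S[:-1] + S[1:])
S[0] = 0.0

# The compensated step response has a smaller worst-case error.
assert np.max(np.abs(S - S_ref)) < np.max(np.abs(S_raw - S_ref))
```

The remaining error is dominated by the finite sampling of the exponential itself, not by the integration shift.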
So, in order to see the actual shape of the error, we have to compensate for this
shift of one half sample. We can do this by artificially adding a leading zero to the
impulse response vector, then cumulatively summing the resulting N+1 elements, and
finally taking the mean of this and the version shifted by one sample, as in the
example below, which uses the vectors from above (see the result in Fig. 6.5.6):
Sr=1-exp(-t);            % 1st-order step response reference
Sw=cumsum([0, Iw]);      % step resp. from impulse + leading zero
% compensate the one half sample delay by taking the mean :
Sw=(Sw(1:N)+Sw(2:N+1))/2;
Sw(1)=0;                 % correct the first value
plot( t, Sr, t, Sw ), title('Ideal vs. windowed step response')
plot( t(1:50), Sr(1:50), t(1:50), Sw(1:50) )
title('Zoom first 50 samples')
plot( t, (Sw-Sr) ), title('Step response error plot')
Note that the TRESP routine allows us to enter the actual denormalized
frequency vector, in which case all (but the first one) of its elements might be greater
than 1. The normalized frequency unit m is then found from the frequency response, by
checking which sample is closest to abs(F(1))/sqrt(2), and then decremented by
1 to compensate for the frequency vector starting from 0. But in the case of a
denormalized frequency vector we should also denormalize the time scale, by dividing
the sampling interval by the actual upper cut off frequency, which is w(m+1).
To continue with our 5th -order Butterworth example, we can now calculate the
impulse and step response by using the TRESP routine in which we have included all
the above corrections:
[z,p]=buttap(5);      % the 5th-order Butterworth system poles
w=(0:1:255)/8;        % form a linearly spaced frequency vector
F=freqw(z,p,w);       % the frequency response at w
[I,t]=tresp(F,w,'i'); % I : ideal impulse, t : normalized time
S=tresp(F,w,'s');     % S : step response ( time same as for I )
plot(t(1:100),I(1:100),t(1:100),S(1:100))  % plot 100 points of I and S vs. t
The results should look just like Fig. 6.5.1. Here is the TRESP routine:
function [y,t]=tresp(F,w,r,g)
%TRESP Transient RESPonse, using Fast Fourier Transform algorithm.
%      Call : [y,t]=tresp(F,w,r,g);
%      where:
%      F --> complex-frequency response, length-N vector, N=2^B, B=int.
%      w --> can be the related frequency vector of F, or it
%            can be the normalized frequency unit index, or it
%            can be zero and the n.f.u. index is found from F.
%      r --> a character, selects the response type returned in y:
%            - 'u' is the unity area impulse response (the default)
%            - 'i' is the ideal impulse response
%            - 's' is the step response
%      g --> an optional input argument: plot the response graph.
%      y --> the selected system response.
%      t --> the normalized time scale vector.
% Author :
% ----------- Preparation and checking the input data ------------
if nargin < 3
   r='u';       % select the default response if not specified
end
G=abs(F(1));    % find system DC gain
N=length(F);    % find number of input frequency samples
v=length(w);    % get the length of w
if v == 1
   m=w;         % w is the normalized frequency unit or zero
elseif v == N
   % find the normalized frequency unit
   m=find(abs(w-1)==min(abs(w-1)))-1;
   if isempty(m)
      m=0;      % not found, try from the half power bandwidth
   end
else
   error('F and w are expected to be of equal length !');
end
if m == 0
   % find the normalized frequency unit index
   m=max(find(abs(F)>=G/sqrt(2)))-1;
end
% check magnitude slope between the 2nd and 3rd octave above cutoff
M=abs(diff(20*log10(abs(F(1+4*m*[1,2])))));
x=3;            % system is 3rd-order or higher (>=18dB/2f)
if M < 9
   x=1;         % probably a 1st-order system (6dB/2f)
elseif M < 15
   x=2;         % probably a 2nd-order system (12dB/2f)
end
% ----------- Form the window function ---------------------------
if x < 3
   W=0.54-0.46*cos(2*pi*(N+1:2*N)/(2*N));  % right half Hamming
   F=W.*F;      % frequency response windowed
end
% ----------- Normalize
A=2*pi*m;
dt=A/N;
if v == N
dt=dt/w(m+1);
end
t=dt*(0:1:N-1);
[Figure: top panel: normalized amplitude of the reference impulse response Ir and the
computed responses, with error curves En = abs(In-Ir) and Ew = abs(Iw-Ir) on an
enlarged scale, vs. t/T0; bottom panel: step responses Sr, Sn, Sw with error curves
En = abs(Sn-Sr) and Ew = abs(Sw-Sr), vs. t/T0.]
Fig. 6.5.5 and 6.5.6: The first 30 points of a 256 sample long 1st-order impulse
and step response vs. the analytically calculated references. The error plots En
and Ew are enlarged 10 times. Although the impulse response calculated from the
normal frequency response has a relatively small error, it integrates to an
unacceptably high value (4%) in the step response. In contrast, by windowing the
frequency response both time domain errors are much lower; the step response
final value is in error by less than 0.2%.
[Figure: top panel: impulse responses Ir (reference), In (from the normal frequency
response) and Iw (from the windowed frequency response), with error curves
En = abs(In-Ir) and Ew = abs(Iw-Ir), vs. t/T0; bottom panel: step responses Sr, Sn, Sw
with error curves En = abs(Sn-Sr) and Ew = abs(Sw-Sr), vs. t/T0.]
Fig. 6.5.7 and 6.5.8: As in Fig. 6.5.5 and 6.5.6, but with 40 samples of a 2nd-order
Butterworth system. The impulse response error for the windowing procedure is higher
at the beginning, but falls off more quickly; therefore the step response final value error
is still much lower (note that the step response error plots are enlarged 100 times). The
oscillations in the error plots, due to the Gibbs effect, also begin to show.
[Figure: impulse responses Ir, In, Iw with error curves En = abs(In-Ir) and
Ew = abs(Iw-Ir), and step responses Sr, Sn, Sw with error curves En = abs(Sn-Sr)
and Ew = abs(Sw-Sr), vs. t/T0.]
Fig. 6.5.9 and 6.5.10: As in Fig. 6.5.5-8, but with 50 samples of a 3rd-order
Butterworth system. Windowing does not help any longer and produces an even
greater error. The dominant error is now due to the Gibbs effect.
For a system with distinct (non-repeating) poles, the residue of the pole s_k is:

    r_k(t) = \lim_{s \to s_k} (s - s_k)\, F(s)\, e^{st}
           = \prod_{i=1}^{n}(-s_i)\;
             \frac{\prod_{j=1}^{m}(s_k - z_j) \Big/ \prod_{j=1}^{m}(-z_j)}
                  {\prod_{i=1,\, i \neq k}^{n}(s_k - s_i)}\; e^{s_k t}        (6.6.1)

and the time domain response is the sum of the residues of all the poles.
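The following NumPy sketch (ours; the book's implementation is the ATDR routine below) applies this residue formula to a 2nd-order Butterworth system, for which the impulse response is known in closed form:

```python
import numpy as np

# 2nd-order Butterworth poles; all-pole system F(s) = prod(-p)/prod(s - p_i).
p = np.array([-1+1j, -1-1j]) / np.sqrt(2)
t = np.linspace(0.0, 10.0, 1001)

# Residue at each pole (Eq. 6.6.1, no zeros):
#   r_k = prod(-p_i) / prod_{i != k}(p_k - p_i)
h = np.zeros_like(t)
for k in range(len(p)):
    others = np.delete(p, k)
    r_k = np.prod(-p) / np.prod(p[k] - others)
    h = h + (r_k * np.exp(p[k]*t)).real   # imaginary parts cancel pairwise

# Closed-form check: h(t) = sqrt(2) * exp(-t/sqrt(2)) * sin(t/sqrt(2))
h_ref = np.sqrt(2)*np.exp(-t/np.sqrt(2))*np.sin(t/np.sqrt(2))
assert np.allclose(h, h_ref, atol=1e-9)
```

Because the residues come in conjugate pairs, taking the real part of each term is equivalent to summing the full pair, which is the same simplification ATDR relies on.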
[z,p]=buttap(5);      % the 5th-order Butterworth system poles
t=(0:1:300)/15;       % the time vector, 15 samples per time unit
I=atdr(z,p,t,'i');    % the ideal-impulse response
S=atdr(z,p,t,'s');    % the step response
plot(t,I,t,S)         % plot both
The resulting plot should look the same as in Fig. 6.5.1 (but with a much better
accuracy!).
function y=atdr(z,p,t,q)
%ATDR Analytical Time Domain Response by simplified residue calculus
%     (does not work for systems with multiple poles).
%     y=atdr(z,p,t) or
%     y=atdr(z,p,t,'n') returns the normalized impulse response of a
%                       unity gain system, specified by zeros z and
%                       poles p in time t.
%     y=atdr(z,p,t,'i') returns the impulse response, denormalized to
%                       the ideal impulse input.
%     y=atdr(z,p,t,'s') returns the step response of the system.
%
%     Specify the time as : t=(0:1:N-1)/T, where N is the number of
%     desired time domain samples and T is the number of samples in
%     the time scale unit, i.e.: t=(0:1:200)/10
% Author :
if nargin==3
   q='n';      % by default, return the unity gain impulse response
end
n=max(size(p));            % find the number of poles
for k=1:n                  % test for repeating poles
   P=p;
   P(k)=[ ];               % exclude the pole currently tested
   if any(abs(P-p(k))==0)  % is there another such pole ?
      error('ATDR cannot handle systems with repeating poles!')
   end
end
dc=1;                      % set low pass system flag
if isempty(z)
   Z=1;                    % no zeros
else                       % zeros
   if any( abs(z) < 1e-6 )
      dc=0;                % HP or BP system, clear dc flag
   end
   if all( abs( real( z ) ) < 1e-6 )
      z = j * imag( z ) ;  % all zeros on imaginary axis
   end
   Z=ones(size(p)) ;
   if dc
      for k=1:n
         Z(:,k)=prod(p(k)-z)/prod(-z);
      end
   else
      for k = 1:n
         for h = 1:max(size(z))
            if z(h) == 0
               Z(k,:) = Z(k,:)*p(k) ;
            else
               Z(k,:) = Z(k,:)*(p(k)+z(h))/z(h) ;
            end
         end
      end
   end
   Z=Z(:);                 % column-wise orientation
end
if n == 1
   D=1;            % single pole case
else
   for k = 1:n
      d=p(k)-p;    % difference, column orientation
      d(k)=[ ];    % k-th element = 0, eliminate it
      D(:,k)=d;    % k-th column of D
   end
   if n > 2
      D=(prod(D)); % make column-wise product if D is a matrix
   end
   D=D.';          % column-wise orientation
end
P=prod(-p)*Z./D;           % impulse residues
if q == 's'
   P=P./p;                 % if step response is required, divide by p
end
t=t(:).';                  % time vector, row orientation
y=P(1)*exp(p(1)*t);        % response, first row
for k = 2:n
   y=[y; P(k)*exp(p(k)*t)];  % next row
   y=sum(y);               % sum column-wise, return a single row
end
y=real(y);                 % result is real only (imaginary parts cancel)
if (q == 's') & ( isempty(z) | dc == 1 )
   y=y+1;                  % if step resp., add 1 for the pole at 0+j0
end
if ( q == 'i' | q == 'n' ) & ( dc == 0 )
   y=-diff([0, y]);        % impulse response of a high pass system
end
if q == 'n'
   y=y/abs(sum(y));        % normalize impulse resp. to unity gain
end
[Figure: s-plane pole positions of the Bessel system and the Butterworth system
(axis scale x10^3).]
Fig. 6.7.1: The Butterworth poles (on the unit circle) and the
Bessel-Thomson poles (on the fitted ellipse). Note that for the
same bandwidth (1 kHz) the values of the Bessel-Thomson poles are
much larger, but with a lower ratio of the imaginary to the real part.
The frequency response plots are shown in Fig. 6.7.2. Note the equal pass band
(3 dB point) and the equal slope at high frequencies. However, the Butterworth system
attenuation is an order of magnitude (20 dB) better.
[Figure: magnitude M [dB], 0 to -100 dB, vs. f [kHz], 0.1 to 10; the 5-pole
Butterworth and 5-pole Bessel-Thomson responses, with the -3 dB point marked
at 1 kHz.]
Fig. 6.7.2: Frequency responses of the Butterworth and Bessel-Thomson system. For an
equal cut off frequency (f_h = 1 kHz), the Butterworth system stop band attenuation is
about an order of magnitude (10 times, or 20 dB) better than that of the Bessel-Thomson.
Using the same poles and the ATDR routine, we compare the step responses:
t=(0:1e-5:3e-3);      % the 3 ms time vector, 100 samples/ms
y1=atdr(z1,p1,t,'s'); % Butterworth step response
y2=atdr(z2,p2,t,'s'); % Bessel-Thomson step response
plot(t*1000,y1,'-r',t*1000,y2,'-b'), xlabel('t [ms]') % see Fig.6.7.3
[Figure: step responses, normalized amplitude 0 to 1.2, of the 5-pole Bessel-Thomson
and the 5-pole Butterworth system, vs. t [ms], 0 to 3 ms.]
Fig. 6.7.3: Step responses of the Butterworth and Bessel-Thomson system. For the same cut off
frequency (1 kHz) the Bessel-Thomson system's delay is smaller; the overshoot is only 0.4% and
there is no ringing, so settling to 0.1% occurs within the first 1 ms. Although the rise times
are nearly equal, the Butterworth system is a poor choice if time domain performance is required,
since it settles to 0.1% only after some 5 ms (Chebyshev and elliptic filter systems are
even worse in this respect).
Résumé of Part 6
The algorithms shown are small, simple, easy to use, and fast in execution. They
are ideal for starting a system design from scratch and for specifying the design goals, as
well as for providing a reference with which a realized prototype can be compared.
We have shown how the system performance can easily be evaluated by using
the routines developed for Matlab, the prediction of the system time domain response in
particular. We also hope that the development and application examples of these
routines offer a deeper insight into how the system should be designed as a whole.
Still, the reader, as the future system designer, is left alone with the most
demanding task: finding the circuitry and hardware that will perform as required, and
engineering experience is the only help here. This book should help in understanding how
it might be possible to push the bandwidth up, smooth the transient, and reduce the
settling time. But there are also many other important parameters which must be
carefully considered when designing an amplifier, such as noise, linearity, electrical and
thermal stability, output power, slew rate limiting, the time it takes to recover from
overdrive, etc.
However, these parameters (with the exception of electrical stability) are in
most cases independent of the system pole and zero locations, but are strongly
influenced by the circuit's topology and by the type of active devices used for the
realization.
Once the design goals have been set and the circuit configuration selected,
performance verification and iterative finalization can then be done using one of the
many CAD/CAE programs available on the market.
To see the numerical convolution routine and calculation examples, and an
actual amplifier–filter system design example calculated using the algorithms
developed so far, please turn to Part 7.
Part 7:
List of Figures:
Fig. 7.1.1: Convolution example: gated sine wave response of a 5-pole Butterworth system ......... 7.11
Fig. 7.1.2: Checking convolution: step response of the 5-pole Butterworth system ................. 7.12
Fig. 7.1.3: Convolution of a 2-pole Bessel–Thomson step response with a 2-pole Butterworth system  7.13
Fig. 7.1.4: Input signal example for the spectral-domain convolution ............................. 7.14
Fig. 7.1.5: Input, system and output spectra ..................................................... 7.14
Fig. 7.1.6: Output signal compared with the input signal ......................................... 7.15
List of Routines:
VCON (Numerical Convolution Integration) ..................................................................................... 7.8
ALIAS (Alias Frequency of a Sampled Signal) ................................................................................ 7.19
7.0 Introduction
In Part 6 we developed a few numerical algorithms that will serve us as
the basis of system analysis and synthesis. We have shown how simple it is to
implement the analytical expressions related to the various aspects of system
performance in compact, fast executing computer code which reduces the tedious
mathematics to pure routine. Of course, a major contribution to this ease was
provided by the programming environment, a high level, maths-oriented language
called Matlab (Ref. [7.1]).
As wideband amplifier designers, we want to be able to accurately predict
amplifier performance, particularly in the time domain. With the algorithms
developed, we now have the essential tools to revisit some of the circuits presented in
previous parts, possibly gaining a better insight into how to put them to use in our
new designs.
But the main purpose of Part 7 is to put the algorithms in a wider perspective.
Here we intentionally use the term system, in order to emphasize the high degree of
integration present in modern electronics design, which forces us to abandon the old
paradigm of adding up separately optimized subsystems into the final product;
instead, the design process should be conceived to optimize the total system
performance from the start. As more and more digital processing power is being built
into modern products, the analog interface with the real world needs to be given
adequate treatment on the system level, so that the final product eventually becomes a
successful integration of both the analog and the digital world.
Now, we hear some of you analog circuit designers asking in a low voice:
why do we need to learn any of this digital stuff? The answer is that digital
engineers would have a hard time learning the analog stuff, so there would be no one
to understand the requirements and implications of a decent AD or DA interface. On
the other hand, for an analog engineer learning the digital stuff is simple, almost
trivial; it pays back well with better designs, and it earns you the respect of
fellow digital engineers.
- 7.5 -
P.Stari, E.Margan
            t1
    y(τ) =  ∫  x(t) f(τ − t) dt                                  (7.1.1)
            t0

where τ is a fixed time constant, its value chosen so that f(τ − t) is time reversed.
Usually, it is sufficient to make τ large enough to allow the system impulse response
f(t) to completely relax and reach the steady state again (not just the first zero-crossing point!).
If x(t) was applied to the system at t0, then this can be the lower limit of
integration. Of course, the time scale can always be renormalized so that t0 = 0. The
upper integration limit, labeled t1, can be wherever needed, depending on how much
of the input and output signal we are interested in.
Now, in Eq. 7.1.1 dt is implicitly approaching zero, so there would be an
infinite number of samples between t0 and t1. Since our computers have a limited
amount of memory (and we have a limited amount of time!) we must make a
compromise between the sampling rate and the available memory length and adjust
them so that we cover the signal of interest with enough resolution in both time and
¹ Bounded input → bounded output. This property is a consequence of our choice of basic mathematical
assumptions; since our math tools were designed to handle an infinite amount of infinitesimal
quantities, BIBO is the necessary condition for convergence. However, in the real analog world, we
are often faced with UBIBO requirements (unbounded input), i.e., our instrumentation inputs must be
protected from overdrive. Interestingly, the inverse of BIBO is in widespread use in the computer
world; in fact, any digital computer is a GIGO type of device (garbage in → garbage out; unbounded!).
² Linearity, Time Invariance, Causality. Although some engineers consider oscillators to be acausal,
there is always a perfectly reasonable cause why an amplifier oscillates, even if we fail to recognise it
at first.
amplitude. So if N is the number of memory bytes reserved for x(t), the required
sampling time interval is:

    Δt = (t1 − t0) / N                                           (7.1.2)
Then, if Δt replaces dt, the integral in Eq. 7.1.1 transforms into a sum of N
elements, x(t) and y(t) become vectors x(n) and y(n), where n is the index of a
signal sample location in memory, and f(τ − t) becomes f(m-n), with m=length(f),
resulting in:

            N
    y(m) =  Σ  x(n) f(m − n)                                     (7.1.3)
           n=1
Here Δt is implicitly set to 1, since the difference between two adjacent
memory locations is a unit integer. Good book-keeping practice, however,
recommends the construction of a separate time scale vector, with values from t0 to
t1, in increments of Δt between adjacent values. All other vectors are then plotted
against it, as we have seen done in Part 6.
7.1.2 Numerical Convolution Algorithm
In Part 1 we have seen that solving the convolution integral analytically can be
a time consuming task, even for a skilled mathematician. Sometimes, even if x(t) and
f(t) are analytic functions, their product need not be elementarily integrable in the
general case. In such cases we prefer to take the Laplace transform route; but this route can
sometimes be equally difficult. Fortunately, numerical computation of the convolution
integral, following Eq. 7.1.3, can be programmed easily:
function y=vcon(f,x)
%VCON   Convolution, step-by-step example. See also CONV and FILTER.
%
%       Call :   y=vcon(f,x);
%       where:   x(t) --> the input signal
%                f(t) --> the system impulse response
%                y(t) --> the system response to x(t) by convolving
%                         f(t) with x(t).
%       If length(x)=nx and length(f)=nf, then length(y)=nx+nf-1.
nx=length(x); nf=length(f);
y=zeros(nf,nx+nf-1);           % one row per sample of f
for k=1:nf
   y(k,k:k+nx-1)=f(k)*x;       % k-th row: x scaled by f(k), shifted by k-1
   disp(y)                     % display every intermediate result
end
y=sum(y,1);                    % the convolution is the sum of the rows
To get a clearer view of what the VCON routine is doing, let us write a short
numerical example, using a 6-sample input signal and a 3-sample system impulse
response, and display every intermediate result of the matrix y in VCON:

x=[0 1 3 5 6 6];
f=[1 3 -1];
y=vcon(f,x);
% The rows of y hold the successive partial products f(k)*x,
% each shifted by one more sample:
%
%    f(1)*x:   0   1   3   5   6   6   0   0
%    f(2)*x:   0   0   3   9  15  18  18   0
%    f(3)*x:   0   0   0  -1  -3  -5  -6  -6
%             -------------------------------
%    sum:      0   1   6  13  18  19  12  -6
%
% After 3 iterations (because f is only 3 samples long) the sum of
% the rows is the complete convolution:
%
%    0   1   6  13  18  19  12  -6
%
% Actually, the system response to x is only the first 6 elements:
%
%    0   1   6  13  18  19
%
% since there are only 6 elements in x, the process assumes the rest
% to be zeros. So the remaining two elements of the result represent
% the relaxation from the last value (19) to zero by the integration
% of the system impulse response f.
% ......... etc.
For convolution Matlab has a function named CONV, which uses the built-in
FILTER command to run substantially faster, but then the process remains hidden
from the user; however, the final result is the same as with VCON. Another property
of Matlab is the matrix indexing, which starts with 1 (see the lower limit of the sum
symbol in Eq. 7.1.3), in contrast to most programming languages, which use memory
pointers (base address + offset, the offset of the array's first element being 0).
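For comparison, the same sum is easily written in a 0-base-indexed language; this Python sketch (ours, not from the book) reproduces the VCON result of the example above:

```python
def vcon_py(f, x):
    """Direct evaluation of the convolution sum of Eq. 7.1.3:
    each sample f[k] adds a scaled, k-shifted copy of x."""
    y = [0.0] * (len(x) + len(f) - 1)
    for k, fk in enumerate(f):
        for n, xn in enumerate(x):
            y[k + n] += fk * xn
    return y

x = [0, 1, 3, 5, 6, 6]   # the 6-sample input signal used above
f = [1, 3, -1]           # the 3-sample impulse response
print(vcon_py(f, x))     # [0.0, 1.0, 6.0, 13.0, 18.0, 19.0, 12.0, -6.0]
```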
7.1.3 Numerical Convolution Examples
Let us now use the VCON routine in a real life example. Suppose we have a
gated sine wave generator connected to the same 5th-order Butterworth system which
we inspected in detail in Part 6. Also, let the Butterworth system's half power
bandwidth be 1 kHz, the generator frequency 1.5 kHz, and we turn on the gate in the
instant the signal crosses the zero level. From the frequency response calculations, we
know that the forced response amplitude (long after the transient) will be:

Aout=Ain*abs(freqw(z,p,1500/1000));

where z are the zeros and p are the poles of the normalized 5th-order Butterworth
system; the signal frequency is normalized to the system's cut off frequency.
simulate this using the algorithms we have developed in Part 6 and VCON:
fh=1000;
fs=1500;
t=(0:1:300)/(50*fh);
nt=length(t);
[z,p]=buttap(5);
p=2*pi*fh*p;
Ir=atdr(z,p,t,'n');
d=25;
y=vcon(Ir,x);
% convolve x with Ir ;
The convolution result, compared to the input signal and the system impulse
response, is shown in Fig. 7.1.1.
Note that we have plotted only the first nt samples of the convolution result;
however, the total length of y is length(x)+length(Ir)-1, or one sample less than
the sum of the input signal and the system response lengths. The first length(x)=nt
samples of y represent the system's response to x, whilst the remaining
length(Ir)-1 samples are the consequence of the system relaxation: since there are
no more signal samples in x after the last point x(nt), the convolution assumes that
the input signal is zero and calculates the system relaxation from the last signal value.
[Figure: input x(t), impulse response Ir(t) and convolution result y(t), amplitude −1.0 to 0.6, vs. t [ms] (0 to 6).]
Fig. 7.1.1: Convolution example: response y(t) to a sine wave x(t) switched on into
the 5th-order Butterworth system, whose impulse response is Ir(t), shown here in its
ideal size (instead of unity gain); both are delayed by the same switch-on time
(0.5 ms). The system responds by phase shifting and amplitude modulating the first
few wave periods, finally reaching the forced (steady state) response.
The resulting step response, shown in Fig. 7.1.2, should be identical to that of
Fig. 6.1.11, Part 6, neglecting the initial 0.5 ms (25 samples) time delay and the
different time scale:
[Figure: unit step h(t), impulse response Ir(t) and step response y(t), amplitude −0.2 to 1.2, vs. t [ms] (0 to 3).]
Fig. 7.1.2: Checking convolution: response y(t) of the 5th-order Butterworth
system to the unit step h(t). The system's impulse response Ir(t) is also shown, but
in its ideal size (not unity gain). Apart from the 0.5 ms (25-sample) time delay and
the time scale, the step response is identical to the one shown in Part 6, Fig. 6.1.11.
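The identity being checked here is easy to verify numerically: convolving an impulse response with a unit step accumulates the impulse response, so the step response is the running sum of Ir. A small pure-Python illustration (toy numbers, not the book's data):

```python
from itertools import accumulate

def conv(f, x):
    # direct convolution sum, as in VCON
    y = [0.0] * (len(x) + len(f) - 1)
    for k, fk in enumerate(f):
        for n, xn in enumerate(x):
            y[k + n] += fk * xn
    return y

Ir = [0.1, 0.4, 0.3, 0.15, 0.05]   # toy impulse response (area 1)
step = [1.0] * 8                   # unit step input
y = conv(Ir, step)
# the first samples of y reproduce the running sum of Ir:
running = list(accumulate(Ir))
assert all(abs(a - b) < 1e-12 for a, b in zip(y, running))
```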
We can now revisit the convolution integral example of Part 1, Sec. 1.14,
where we had a unit step input signal, fed to a two-pole Bessel–Thomson system,
whose output was in turn fed to a two-pole Butterworth system. The commands in the
following window simulate the process and the final result of Fig. 1.14.1. But this
time, let us use the frequency to time domain transform of the TRESP (Part 6) routine.
See the result in Fig. 7.1.3 and compare it to Fig. 1.14.1g.
[z1,p1]=bestap(2,'t');  % 2-pole Bessel-Thomson system
[z2,p2]=buttap(2);      % 2-pole Butterworth system
N=256;                  % number of frequency samples
m=4;                    % frequency scale denormalization factor
w=(0:1:N-1)/m;          % angular frequency vector
F1=freqw(p1,w);         % Bessel-Thomson frequency response
F2=freqw(p2,w);         % Butterworth frequency response
[S1,t]=tresp(F1,w,'s'); % Bessel-Thomson step response and time vector
I2=tresp(F2,w,'u');     % Butterworth impulse response
d=max(find(t<=15));     % index of the last time sample below 15 s
I2=I2(1:d);             % truncate the impulse response accordingly
[Fig. 7.1.3: the Bessel–Thomson step response S1(t), the Butterworth impulse response I2(t) and their convolution, amplitude −0.2 to 1.2, vs. t [s] (0 to 15); compare with Fig. 1.14.1g.]
[z,p]=bestap(5,'n');    % normalized 5-pole Bessel-Thomson system
p=p*2*pi*1e+6;          % denormalize to a 1 MHz cut off
F=freqw(z,p,2*pi*f);    % system frequency response
a=max(find(t<=5e-6));   % display window limits
b=min(find(t>=20e-6));
plot( t(a:b), g(a:b), '-g', t(a:b), y(a:b), '-b' )
xlabel('Time [\mus]')   % see Fig.7.1.6
[Figure: input signal g(t), amplitude −1.5 to 1.5, vs. t [µs] (0 to 60).]
Fig. 7.1.4: Input signal example used for the spectral-domain convolution
example (first 1200 samples of the 2048 total record length).
[Figure: spectral magnitudes |G(f)|, |F(f)| and |Y(f)|, 0 to 1.0, vs. f [MHz].]
Fig. 7.1.5: The spectrum G(f) of the signal in Fig. 7.1.4 is multiplied by the
system's frequency response F(f) to produce the output spectrum Y(f). Along
with the modulated signal centered at 560 kHz, there is a strong 2.8 MHz
interference from another source and a high level of white noise (rising with
frequency), both being substantially reduced by the filter.
[Figure: input g(t) and filtered output y(t), amplitude −1.5 to 1.5, vs. t [µs] (expanded scale, 5 to 20).]
Fig. 7.1.6: The output spectrum is returned to the time domain as y(t) and is
compared with the input signal g(t), in an expanded time scale. Note the small
change in amplitude, the reduced noise level and the envelope delay (approx.
¼ period time shift), with little change in phase. The time shift is equal to ½
the number of samples of the filter impulse response.
Fig. 7.1.6 illustrates the dramatic improvement in signal quality that can be
achieved by using Bessel filters.
In MRI systems the test object is put in a strong static magnetic field. This
causes the nucleons of the atoms in the test object to align their magnetic spin to the
external field. Then a short RF burst, having a well defined frequency and duration, is
applied, tilting the nucleon spin orientation perpendicular to the static field (this
happens only to those nucleons whose resonant frequency coincides with that of the
RF burst).
After the RF burst has ceased, the nucleons gradually regain their original spin
orientation in a top-like precessing motion, radiating away the excess electromagnetic
energy. This EM radiation is picked up by the sensing coils and detected by an RF
receiver; the detected signal has the same frequency as the excitation, both
being functions of the static magnetic field and the type of nucleons. Obviously the
intensity of the detected radiation is proportional to the number of nucleons having
the same resonant frequency³.
In addition, since the frequency is field-dependent, a small field gradient can be
added to the static magnetic field, in order to split the response into a broad spectrum.
The shape of the response spectral envelope then represents the spatial density of the
specific nucleons in the test object. By rotating the gradient around the object, the
recorded spectra represent sliced views of the object from different angles.
A computer can be used to reconstruct the volumetric distribution of particular atoms
through a process called back-projection (in effect, a type of spatial convolution).
From this short description of the MRI technique it is clear that the most vital
parameter of the filter applied to smooth the recorded signal is its group delay
flatness. Only a filter with a group delay flat well into the stop band will be able
to faithfully deliver the filtered signal, preserving its shape both in the time and the
frequency domain, and Bessel–Thomson filters are ideal in this sense. Consequently a
sharper image of the test object is obtained.
³ The 1952 Nobel prize for physics was awarded to Felix Bloch and Edward Mills Purcell for their
work on nuclear magnetic resonance; more info at <http://nobelprize.org/physics/laureates/1952/>.
The purpose of filtering out the signal content above the Nyquist frequency is to avoid
aliasing. Fig. 7.2.1 shows a typical situation resulting in a signal frequency alias in
relation to the sampling clock frequency.
[Figure: high frequency signal S, sampling clock and low frequency alias A, vs. t (0 to 10).]
Fig. 7.2.1: Aliasing (frequency mirroring). A high frequency signal S, sampled by an ADC
at each rising edge of the clock C of a comparably high frequency, cannot be distinguished
from its low frequency alias A, which is equal to the difference between the signal and the
clock frequency, fa = fs − fc. In this figure, fs = (9/10) fc, therefore fa = −(1/10) fc
(yes, a negative frequency! This can be verified by increasing the clock frequency very
slightly and watching the aliased signal apparently moving backwards in time).
    fa = fs − fc                                                 (7.2.1)
A wheel, rotating at the cycle frequency fw equal to the picture rate fp (or its
integer multiple or sub-multiple, fw = n·fp/m, where m is the number of wheel
arms), would be perceived as stationary. Likewise, if an ADC's sampling frequency is
equal to the signal frequency (see Fig. 7.2.2), the apparent result is a DC level.
[Figure: clock C and signal S of equal frequency; every sample lands on the same signal level, vs. t (0 to 10).]
Fig. 7.2.2: The alias of a signal equal in frequency to the sampling clock looks like a DC level.
[Figure: signal S slightly above the sampling frequency and its low frequency alias, vs. t (0 to 10).]
Fig. 7.2.3: A signal frequency slightly higher than the sampling frequency aliases
into a low frequency, equal to the difference of the two (but now positive).
Here is the ALIAS routine for Matlab, by which we can calculate the aliasing
for any desired clock and signal frequency.
function fa=alias(fs,fc,phi)
% ALIAS calculates the alias frequency of a sampled sinewave signal.
%       Call :  fa = alias( fs, fc, phi ) ;
%       where:  fs is the signal frequency
%               fc is the sampling clock frequency
%               phi is the initial signal phase shift
if nargin < 3
   phi = pi/3 ;
end
ofs = 2 ;                 % clock offset
A = 1/ofs ;               % clock amplitude
m = 100 ;                 % signal reconstruction factor is equal to
                          % the number of dots within a clock period
N = 1 + 10 * m ;          % total number of dots
dt = 1 / ( m * fc ) ;     % delta-t for time reconstruction
t = dt * ( 0 : 1 : N ) ;  % time vector
fa = fs - fc ;            % alias frequency (can be negative!)
clk = ofs + A * sign( sin( 2 * pi * fc * t ) ) ; % clock
sig = sin( 2 * pi * fs * t + phi ) ;             % sampled signal
sal = sin( 2 * pi * fa * t + phi ) ;             % alias signal
plot( t, clk, '-g',...
      t, sig, '-b',...
      t, sal, '-r',...
      t(1:m:N), sig(1:m:N), 'or')
xlabel( 't' )
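The identity the routine relies on, that a sine at fs sampled at fc produces exactly the same sample values as a sine at fa = fs − fc, can be checked in a few lines of Python (our check, not part of the book's code):

```python
import math

def sampled(f, fc, phi, n):
    """n samples of sin(2*pi*f*t + phi), taken at t = k/fc."""
    return [math.sin(2 * math.pi * f * k / fc + phi) for k in range(n)]

fc = 1.0          # sampling clock frequency (normalized)
f_sig = 0.9       # signal at 9/10 of the clock, as in Fig. 7.2.1
fa = f_sig - fc   # alias frequency: -0.1, negative!

s1 = sampled(f_sig, fc, math.pi / 3, 20)
s2 = sampled(fa, fc, math.pi / 3, 20)
# the two sinusoids are indistinguishable at the sampling instants:
assert all(math.isclose(p, q, abs_tol=1e-9) for p, q in zip(s1, s2))
```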
Of course, the sampled signal is more often than not a spectrum, either
discrete or continuous, and aliasing applies to a spectrum as well as to discrete
frequency signals. In fact, the superposition theorem applies here, too.
We have noted that a sampled spectrum is symmetrical about the sampling
frequency, because a signal, sampled by a clock with exactly the same frequency,
aliases as a DC level which depends on the initial signal phase shift relative to the
clock. However, something odd already happens at the Nyquist frequency, as can be
seen in Fig. 7.2.4 and Fig. 7.2.5. In both figures the signal frequency is equal to the
Nyquist frequency (½ the sampling frequency), but differs in phase. Although the
correct alias signal is equal in amplitude to the original signal, we perceive an
amplitude which varies with phase.
[Figure: signal at the Nyquist frequency, its full-amplitude alias A and the low-amplitude apparent waveform X, vs. t (0 to 10).]
Fig. 7.2.4: When the signal frequency is equal to the Nyquist frequency,
there are two samples per period and the correct alias signal is of the same
amplitude as the original signal. However, the perceived alias amplitude is a
function of the phase difference between the signal and the clock. A 10°
phase shift results in a low apparent amplitude, as shown by the X waveform.
[Fig. 7.2.5: the same Nyquist-frequency signal S and alias A with a different phase relative to the clock, giving a different apparent amplitude, vs. t (0 to 10).]
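The phase dependence is simple to quantify: at the Nyquist frequency the samples fall at tk = k/(2f), so the sampled values are sin(πk + φ) = ±sin φ and the apparent amplitude is |sin φ|. A quick Python check of the two extremes (our sketch, not the book's code):

```python
import math

def apparent_amplitude(phi_deg):
    """Peak of a unit sine sampled at exactly two samples per period."""
    phi = math.radians(phi_deg)
    samples = [math.sin(math.pi * k + phi) for k in range(16)]
    return max(abs(s) for s in samples)

# samples on the peaks: full amplitude; near the zero crossings: tiny
assert math.isclose(apparent_amplitude(90.0), 1.0, abs_tol=1e-9)
assert apparent_amplitude(10.0) < 0.2     # sin(10 deg) ~ 0.17
```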
[Figure, two panels: the sin(ωTs)/(ωTs) envelope vs. f/fN (0.1 to 10), in linear vertical scale on the left and as the log of the absolute value on the right; marked are the 0.707 level, the effective bandwidth fh ≈ 0.43 fN, and the 1/(1 + jf/fa) asymptote.]
Fig. 7.2.6: The spectrum resulting from sampling a constant amplitude sinusoidal signal
varying in frequency from 0.1 fN to 10 fN follows the sin(ωTs)/(ωTs) function, where
Ts = 1/fs. The function is shown in the linear vertical scale on the left and in the log of the
absolute value on the right. The first zero occurs at the Nyquist frequency, the second at the
sampling frequency and so on, at every Nyquist harmonic. Note that the effective sampling
bandwidth fh is reduced to about 0.43 fN. The asymptote is the same as for a simple RC low
pass filter, −20 dB/10f, with a cut off at fa = fN/π.
The aliasing amplitude follows this same sin(ωTs)/(ωTs) function, from the
Nyquist frequency up. An important side effect is that the bandwidth is reduced to
about 0.43 fN. This can be taken into account when designing an input anti-aliasing
filter, partially compensating the function's roll-off below the Nyquist frequency by an
adequate peaking.
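Both numbers quoted above, the ≈0.43 fN effective bandwidth and the roughly 13 dB relaxation near 0.7 fs used later in Sec. 7.2.3, follow from the sin(ωTs)/(ωTs) envelope and can be verified directly (a Python sketch, not the book's Matlab):

```python
import math

def sinc_env(f, fs):
    """Sampling envelope sin(w*Ts)/(w*Ts), with Ts = 1/fs."""
    x = 2 * math.pi * f / fs
    return 1.0 if x == 0 else math.sin(x) / x

fs = 1.0
fN = fs / 2
assert abs(sinc_env(fN, fs)) < 1e-12                           # first zero at fN
assert abs(sinc_env(0.443 * fN, fs) - math.sqrt(0.5)) < 0.01   # -3 dB point
db = 20 * math.log10(abs(sinc_env(0.7 * fs, fs)))
assert -14.0 < db < -12.5                                      # ~13 dB at 0.7 fs
```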
7.2.3 Better Anti-Aliasing With Mixed Mode Filters
By the term mixed mode filter we mean a combination of analog and digital
filtering which gives the same result as a single filter having the same total number of
poles. The simplest way to understand the design requirements and optimization, as
well as the advantages of such an approach, is by following an example.
Let us imagine a sampling system using an ADC with a 12-bit amplitude
resolution and a 50 ns time resolution (sampling frequency fs = 20 MHz). The
number of discrete levels resolved by 12 bits is A = 2¹² = 4096; the ADC relative
resolution level is simply 1/A, or in dB, a = 20·log10(1/A) ≈ −72 dB. According to
the Shannon sampling theorem the frequencies above the Nyquist frequency
(fN = fs/2 = 10 MHz) must be attenuated by at least 2¹² to reduce the alias of the
high frequency spectral content (signal or noise) below the ADC resolution.
As we have just learned, the sin(ωTs)/(ωTs) function of the alias spectrum
allows us to relax the filter requirements by some 13 dB (a factor of about 4.5, or
roughly 2 bits) at the frequency 0.7 fs; for a while, we are going to neglect this,
leaving it for the end of our analysis.
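The resolution figures are quick to verify (in Python, with the values assumed in the text):

```python
import math

A = 2 ** 12                     # levels resolved by a 12-bit ADC
a_dB = 20 * math.log10(1 / A)   # relative resolution level in dB
assert round(a_dB) == -72       # about -72.2 dB

fs = 20e6                       # 50 ns sampling -> 20 MHz
fN = fs / 2                     # Nyquist frequency
assert fN == 10e6
```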
Let us also assume a 4 V peak to peak ADC input signal range and let the
maximum required vertical amplifier sensitivity be 5 mV/division. Since oscilloscope
displays usually have 8 vertical divisions, this means 40 mV of input for a full scale
display, or a gain of 100. We would like to achieve the required gain–bandwidth
product with either a two- or a three-stage amplifier. We shall assume a 5-pole filter
for the two-stage amplifier (a 3-pole and a 2-pole stage), and a 7-pole filter for the
three-stage amplifier (one 3-pole stage and two 2-pole stages). We shall also inspect
the performance of a 9-pole (four-stage) filter, to see if the higher bandwidth (achieved
as a result of a steeper cut off) justifies the cost and circuit complexity of one
additional amplifier stage.
Now, if our input signal were of a square wave or pulse form, our main
requirement would be the shortest possible ADC aperture time and an analog
bandwidth as high as possible. Then we would be able to recognize the sampled
waveform shape even with only 5 samples per period. But suppose we would like to
record a transient event having the form of an exponentially decaying oscillating
wave, along with lots of broad band noise, something like the signal in Fig. 7.1.4. To
do this properly we require both aliasing suppression of the spectrum beyond the
Nyquist frequency and preservation of the waveform shape; the latter requirement limits
our choice of filter systems to the Bessel–Thomson family.
Finally, we shall investigate the possibility of improving the system bandwidth
by filtering the recorded data digitally.
We start our calculations from the requirement that any anti-alias filter must
have an attenuation at the Nyquist frequency fN equal to the ADC resolution level.
Since we know that the asymptotic attenuation slope depends on the system order n
(number of poles) as n·(−20 dB/10f), we can follow those asymptotes from fN back
to the maximum signal level; the crossing point then defines the system cut off
frequency fh for each of the three filter systems.
Since we do not have an explicit relation between the Bessel–Thomson filter
cut off and its asymptote, we shall use Eq. 6.3.10 for Butterworth systems to find the
frequency fa at which the 5-, 7-, and 9-pole Butterworth filter would exhibit the
A = 2¹² attenuation required. By using fh = fN²/fa we can then find the Butterworth
cut off frequencies. Then, by using the modified Bessel–Thomson poles (those that
have the same asymptote as the Butterworth system of comparable order), we can find
the Bessel–Thomson cut off frequencies which satisfy the no-aliasing requirement.
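Numerically the Butterworth part of this procedure looks as follows (a Python sketch, assuming the standard Butterworth magnitude 1/sqrt(1 + (f/fh)^(2n)) for Eq. 6.3.10):

```python
A = 2 ** 12      # required attenuation at the Nyquist frequency
fN = 10e6

def butterworth_cutoff(n, fN=fN, A=A):
    """Cut-off of an n-pole Butterworth that is 1/A down at fN,
    assuming the magnitude 1/sqrt(1 + (f/fh)**(2*n))."""
    fa = fN * (A * A - 1) ** (1 / (2 * n))  # the fN-wide filter reaches 1/A at fa
    return fN * fN / fa                     # then fh = fN^2/fa

# 5, 7 and 9 poles give roughly 1.9, 3.0 and 4.0 MHz; all well above the
# Bessel-Thomson cut-offs (1.16, 1.66, 1.97 MHz) found in Fig. 7.2.7
assert 1.8e6 < butterworth_cutoff(5) < 2.0e6
assert 2.9e6 < butterworth_cutoff(7) < 3.2e6
assert 3.8e6 < butterworth_cutoff(9) < 4.1e6
```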
A=2^12;    % number of levels resolved by a 12-bit ADC
fs=2e+7;   % sampling frequency
fN=fs/2;   % Nyquist frequency
M=1e+6;    % MHz scale factor
We now find the poles and the system bandwidth of the 5-, 7-, and 9-pole
Bessel–Thomson systems, which have their responses normalized to the same
asymptotes as the above Butterworth systems of equal order:
N=601;
% number of frequency samples
f=fN*logspace(-2,0,N); % length-N frequency vector, from 2 decades
% below fN to fN, in log-scale
w=2*pi*f;
% angular frequency
[z5,p5]=bestap(5,'a'); % Bessel-Thomson asymptote-normalized systems
[z7,p7]=bestap(7,'a');
[z9,p9]=bestap(9,'a');
p5=p5*2*pi*fa5;
p7=p7*2*pi*fa7;
p9=p9*2*pi*fa9;
[Figure: attenuation [dB] (−3 to −80) vs. f [MHz] (0.1 to 10); the three curves reach the −3 dB level at f5 = 1.16 MHz, f7 = 1.66 MHz and f9 = 1.97 MHz, and all cross the −72 dB ADC resolution level at 10 MHz.]
Fig. 7.2.7: Magnitude vs. frequency of the Bessel–Thomson 5-, 7-, and 9-pole
systems, normalized to an attenuation of 2⁻¹² (−72 dB) at fN (10 MHz).
Fig. 7.2.7 shows the frequency responses, calculated to have the same
attenuation, equal to the relative ADC resolution level of −72 dB, at the Nyquist
frequency. We now need their approximate 3 dB cut off frequencies:
m=abs(M5-3.0103);  x5=find(m==min(m));  % index nearest the -3dB point
m=abs(M7-3.0103);  x7=find(m==min(m));
m=abs(M9-3.0103);  x9=find(m==min(m));
f5=f(x5);  f7=f(x7);  f9=f(x9);
Note that these values are much lower than the cut off frequencies of the
asymptotes, owing to the more gradual roll-off of Bessel–Thomson systems. Also,
note that a greater improvement in performance is achieved by increasing the system
order from 5 to 7 than from 7 to 9. We would like to have a confirmation of this
from the step responses (later, we shall also see how these step responses would look
when sampled at the actual sampling time intervals):
fs=2e+7;               % sampling frequency
t=(0:1:500)/(25*fs);   % time vector (to calculate the rise times we need
                       % a much finer sampling than the actual 50 ns)
S5=atdr(z5,p5,t,'s');  % step responses
S7=atdr(z7,p7,t,'s');
S9=atdr(z9,p9,t,'s');
% find the samples nearest the 10% and 90% levels and the rise times:
x10=max(find(S5<=0.1));  x90=min(find(S5>=0.9));  Tr5=t(x90)-t(x10);
x10=max(find(S7<=0.1));  x90=min(find(S7>=0.9));  Tr7=t(x90)-t(x10);
x10=max(find(S9<=0.1));  x90=min(find(S9>=0.9));  Tr9=t(x90)-t(x10);
[Figure: step responses, amplitude 0 to 1.2, vs. t [µs] (0 to 1.0); rise times Tr5 = 302 ns, Tr7 = 213 ns, Tr9 = 178 ns.]
Fig. 7.2.8: Step responses of the 5-, 7-, and 9-pole Bessel–Thomson systems, having equal
attenuation at the Nyquist frequency. The rise times are calculated from the number of
samples between the 10% and the 90% of the final value of the normalized amplitude.
We see that in this case the improvement from order 7 to order 9 is not high
enough to justify the added circuit complexity and cost of one more amplifying stage.
So let us say we are temporarily satisfied with the 7-pole filter system.
However, its 1.66 MHz bandwidth for a 12-bit ADC, sampling at 20 MHz, is simply
not good enough. Even the 1.97 MHz bandwidth of the 9-pole system is rather low.
The question is whether we can find a way around the limitations imposed by the
anti-aliasing requirements.
Most ADC recording systems do not have to show the sampled signal in real
time. To the human eye a screen refresh rate of 10 to 20 times per second is fast
enough for most purposes. Also, many systems are intentionally made to record and
accumulate large amounts of data to be reviewed later. So on most occasions there is
plenty of time available to implement some sort of signal post-processing.
We are going to show how a digital filter can be combined with the analog
anti-aliasing filter to expand the system bandwidth beyond the aliasing limit, without
increasing the sampling frequency.
Suppose we could implement some form of digital filtering which would
suppress the alias spectrum below the ADC resolution; we then ask ourselves
what would be the minimum required pass band attenuation of such a filter. The
answer is simple: the filter attenuation must follow the inverse of the alias spectrum
envelope. But if we were to allow the spectrum around the sampling frequency to
alias, our digital filter would need to extend its attenuation characteristic back to DC.
Certainly this is neither practical nor desirable. Therefore, since fs = 2fN, our
bandwidth improvement factor, let us call it B, must be lower than 2.
So let us increase the filter cut off by B = 1.73; the input spectrum would
then contain frequencies only up to B·fN, which would alias back down to fs − B·fN,
in this case (2 − 1.73)·fN = 0.27 fN. This frequency is high enough to allow the realization
of a not too demanding digital filter.
Let us now study the shape of the alias spectrum which would result from
taking our original 7-pole analog filter, denoted by F7o, and pushing it up by the chosen
factor B to F7b, as shown in Fig. 7.2.9. The spectrum SA between fN and B·fN is
going to be aliased below the Nyquist frequency into SB.
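The band edges quoted above can be checked in a couple of lines (fs = 20 MHz, B = 1.73; our Python sketch):

```python
fs = 20e6
fN = fs / 2
B = 1.73

# content up to B*fN aliases back as f -> fs - f:
f_lowest_alias = fs - B * fN
assert abs(f_lowest_alias - 2.7e6) < 1e3   # 0.27*fN = 2.7 MHz

# the Nyquist frequency is the pivot of the spectrum inversion:
assert fs - fN == fN
```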
[Figure: attenuation [dB] (0 to −80) vs. f [MHz] (1 to 100); shown are F7o, the shifted F7b, the Nyquist frequency fN = fs/2, the band SA between fN and B·fN, its alias SB between fs − B·fN and fN, and the −72 dB ADC resolution level.]
Fig. 7.2.9: Alias spectrum of a 7-pole filter with a higher cut off frequency. F7o is
our original 7-pole Bessel–Thomson analog filter, which crosses the 12-bit ADC
resolution level of −72 dB at exactly the Nyquist frequency, fN = fs/2 = 10 MHz.
This guarantees freedom from aliasing, but the bandwidth is rather low. If we move
it upwards by a factor B = 1.73 to F7b, the spectrum SA will alias into SB. Note the
alias spectrum inversion: fN remains in its place, whilst B·fN is aliased to fs − B·fN.
Note also that the alias spectral envelope has changed in comparison with the
original: in the log–log scale plot a linearly falling spectral envelope becomes curved
when aliased. This change of the spectral envelope is important, since it will allow
us to use a relatively simple filter response shape to suppress the aliased spectrum
below the ADC resolution.
Note that in the log frequency scale the aliased spectrum envelope is not
linear, even if the original one is (as defined by the attenuation characteristic of the
analog filter).
If we flip the spectrum SB up, as in Fig. 7.2.10, the resulting spectral envelope,
denoted by Frq, represents the minimal attenuation requirement of a digital filter
which would restore freedom from aliasing.
[Figure: attenuation [dB] (0 to −80) vs. f [MHz] (1 to 100); F7o, F7b, the alias spectrum SB and the flipped envelope Frq between fs − B·fN and fN, with the −72 dB ADC resolution level.]
Fig. 7.2.10: If we invert the alias spectrum SB, the envelope of the resulting
spectrum Frq represents the minimum attenuation requirement that a digital filter
should have in order to suppress the aliased spectrum below the ADC resolution.
The following procedure shows how to calculate and plot the various elements
of Fig. 7.2.9 and Fig. 7.2.10, and Frq in particular, starting from the previously
calculated 7-pole Bessel-Thomson system magnitude F7o:
% the following calculation assumes a log frequency scale and
% a linear (in dB) attenuation; fs, fN and the 7-pole Bessel-Thomson
% pole vector p7 are as defined before:
fs=20e6;                % sampling frequency
fN=fs/2;                % Nyquist frequency
A=2^12;                 % number of levels resolved by a 12-bit ADC
a=20*log10(1/A);        % ADC resolution, -72dB
Nf=601;                 % number of frequency samples
f=logspace(6,8,Nf);     % frequency vector, 1 to 100 MHz
B=1.73;                 % chosen bandwidth increase (max. 1.8)
% the original 7-pole filter magnitude crosses a at fN :
F7o=20*log10(abs(freqw(p7, 2*pi*f)));
% F7o shifted up by B to F7b :
F7b=20*log10(abs(freqw(p7*B, 2*pi*f)));
fA=B*fN;                % F7b crosses the ADC resolution (a) at fA
xn=min(find(f>=fN));    % index of fN in f
xa=min(find(f>=fA));    % index of fA in f
Sa=F7b(xn:xa);          % source of the alias spectrum
fa=f(xn:xa);            % frequency band of Sa
Sb=F7b(xa:-1:xn);       % the alias spectrum, from fs-fA to fN
fb=fs-f(xa:-1:xn);      % frequency band of Sb
Frq=a-Sb;               % min. required dig.filt. magnitude in dB
fr=fb;                  % in the same freq. range: fs-fA to fN
M=1e+6;                 % MHz scale factor
semilogx( f/M,F7o,'-r', f/M,F7b,'-b', fa/M,Sa,'-y', fb/M,Sb,'-c',...
          fr/M,Frq,'-m', [f(1),f(Nf)]/M,[a,a],'--k',...
          [fN,fN],[-72,-5],':k', [fs,fs],[-80,0],':b' )
xlabel('f [MHz]')
As can be seen in Fig. 7.2.10, the required minimum attenuation Frq is broad
and smooth, so we can assume that it can be approximated by a digital filter of a
relatively low order; e.g., if the analog filter has 7 poles, the digital one could have
only 6 poles. The combined system would then be effectively a 13-pole one. Of course,
the digital filter reduces the combined system's cut off frequency, but it would still be
higher than in the original, non-shifted, analog-only version. However, the main
problem is that the cascade of two separately optimized filters has a non-optimal
response, and the shape of the transient suffers most. This can be solved by simply
starting from a higher-order system, say, a 13-pole Bessel-Thomson. Then we assign
7 of the 13 poles to the analog filter and 6 poles to the digital one.
The 6 poles of the digital filter must be transformed into appropriate sampling
time delays and amplitude coefficients. This can be done either with z-transform
mapping, or simply by calculating its impulse response and using it for convolution
with the sampled input signal, as we shall do here.
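The impulse-response route can be sketched in a few lines. The example below (Python; a simple 1-pole low pass stands in for the book's 6-pole filter, so the numbers are purely illustrative) samples an analog prototype's impulse response at the ADC rate and applies it by discrete convolution:

```python
import numpy as np

# stand-in analog prototype: 1-pole low pass, H(s) = a/(s + a)
a = 2 * np.pi * 2e6          # 2 MHz cut off (illustrative)
fs = 20e6                    # ADC sampling frequency
t = np.arange(40) / fs       # 40 sampling instants

h = a * np.exp(-a * t)       # impulse response, sampled at fs
h = h / np.sum(h)            # normalize the taps for unity DC gain

x = np.ones(100)             # sampled unit step input
y = np.convolve(x, h)[:len(x)]   # digital filtering by convolution

print(round(y[-1], 6))       # 1.0 (the step response settles to the DC gain)
```

The same mechanism is used below for the 6-pole digital part, whose sampled impulse response becomes the convolution coefficients.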
But note that since now the 7 poles of the analog filter will be taken from a
13-pole system, they will be different from those of the 7-pole system discussed so far
(see a comparison of the poles in Fig. 7.2.11). Although the frequency response will be
different, the shape of the alias band will be similar, since the final slope is the same
in both cases. Nevertheless, we must repeat the calculations with the new poles.
The question is by which criterion we choose the 7 poles from the 13. From
Frq in Fig. 7.2.10 we can see that the digital filter should not cut sharply, but rather
gradually. Such a response could be achieved if we reserved the poles with the lower
imaginary part for the digital filter and assigned those with the higher imaginary part to the
analog filter. But then the analog filter step response would overshoot and ring,
compromising the dynamic range. Thus, the correct design strategy is to assign the
real pole and every other complex conjugate pole pair to the analog filter, as shown below.
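The splitting rule is easy to demonstrate numerically. The sketch below (Python; it rebuilds the 13th-order Bessel pole set from the reverse Bessel polynomial rather than using the authors' own routines, so only the pattern, not the scaling of Fig. 7.2.11, is reproduced) assigns the real pole and every other conjugate pair, counted by increasing imaginary part, to the analog filter:

```python
import numpy as np
from math import factorial

def bessel_poles(n):
    # roots of the reverse Bessel polynomial,
    # theta_n(s) = sum_k s^k * (2n-k)! / (2^(n-k) * k! * (n-k)!)
    c = [factorial(2*n - k) // (2**(n - k) * factorial(k) * factorial(n - k))
         for k in range(n + 1)]
    return np.roots(c[::-1])          # np.roots wants the highest power first

p = bessel_poles(13)                  # 1 real pole + 6 conjugate pairs
real  = [s for s in p if abs(s.imag) < 1e-9]
pairs = sorted((s for s in p if s.imag > 1e-9), key=lambda s: s.imag)

# digital filter: pairs 1, 3, 5 (the smallest imaginary parts);
# analog filter: the real pole + pairs 2, 4, 6 - every other pair
digital = [s for i, s in enumerate(pairs) if i % 2 == 0]
analog  = real + [s for i, s in enumerate(pairs) if i % 2 == 1]

print(len(real), len(pairs))                    # 1 6
print(2 * len(analog) - 1, 2 * len(digital))    # 7 6 (poles, counting conjugates)
```

The resulting interleaving (digital, analog, digital, ...) matches the pattern of the p7a and p6d imaginary parts listed below.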
[Figure 7.2.11 here: pole locations in the complex plane, real and imaginary axes scaled in units of 10^7 rad/s.]
Fig. 7.2.11: The 13-pole mixed mode system, the analog part marked by ○, the
digital by +, compared with the original 7-pole analog-only filter, marked by *.
Note the difference in pattern size (proportional to bandwidth!).
The values of the mixed mode filter poles for the analog and digital part are:
p7a :                               p6d :
   1e+7 *                              1e+7 *
   -3.0181
   -2.8572 - 1.1751i                   -2.9785 - 0.58579i
   -2.8572 + 1.1751i                   -2.9785 + 0.58579i
   -2.3275 - 2.3850i                   -2.6460 - 1.77250i
   -2.3275 + 2.3850i                   -2.6460 + 1.77250i
   -1.1637 - 3.7353i                   -1.8655 - 3.02640i
   -1.1637 + 3.7353i                   -1.8655 + 3.02640i
fN=1e+7;                % Nyquist frequency
M=1e+6;                 % MHz-us conversion factor
fs=2*fN;                % sampling frequency
A=2^12;                 % ADC dynamic range
a=20*log10(1/A);        % ADC resolution limit in dB
Nf=601;                 % number of frequency samples
f=logspace(6,8,Nf);     % log-scaled frequency vector, 1 - 100 MHz
The result of the above plot operation (semilogx) can be seen in Fig. 7.2.12,
where we have highlighted the spectrum under the analog filter F7a beyond the
Nyquist frequency, its alias, and the inverted alias, which represents the minimum
required attenuation Frq of the digital filter. Note how the 6-pole digital filter's
response F6d just touches the Frq. It is probably just a coincidence that the bandwidth
increase factor B chosen for the analog filter is equal to √3 (we have arrived at this
value by repeating the above calculation several times, adjusting B on each run). This
process can be easily automated by iteratively comparing the samples of Frq and F6d,
and increasing or decreasing the factor B accordingly.
[Figure 7.2.12 here: attenuation in dB vs. frequency in MHz (log scale), showing fN, fs, the -3 dB level, the cut off frequencies fh7 and fh13, the responses F7o, F7a, F6d, F13 and Frq, the alias source Sa, its alias Sb, and the ADC resolution line.]
Fig. 7.2.12: The bandwidth improvement achieved with the 13-pole mixed mode filter.
F7o is the original 7-pole analog-only filter response, as in Fig. 7.2.9. The new
analog filter response F7a, which uses 7 of the 13 poles as shown in Fig. 7.2.11, was
first calculated to have the same 72 dB attenuation at the Nyquist frequency fN (as
F7o), and then all the 13 poles were increased by the same factor B = 1.73 as before.
The resulting F7a response generates the alias spectrum Sb from its source Sa. The
envelope of the inverted alias spectrum Frq sets the upper limit for the digital filter's
response, F6d, required for effective alias suppression. The response F13 is the total
analog + digital 13-pole system, which crosses the ADC resolution limit at fN, and
suppresses the alias band below the ADC resolution level, which was the main goal. In
this way the system's cut off frequency has increased from 1.66 MHz for F7o to 2.2
MHz for F13. Thus, a bandwidth improvement of about 1.32× is achieved (not very
much, but still significant; note that there are ways of improving this further!).
As expected, the digital filter has reduced the system bandwidth below that of the
analog filter; however, the analog + digital system's response F13 has a cut off
frequency fh13 well above the fh7 of the original analog-only 7-pole response F7o:
f7o ≈ 1.66 MHz
f13 ≈ 2.2 MHz
% plot the step responses with the 0.1 and 0.9 reference levels :
plot( t*M, g7o, '-k',...
t*M, g13, '-r',...
t([5, 25])*M, [0.1, 0.1], '-k',...
t([15, 45])*M, [0.9, 0.9], '-k' )
xlabel('t [us]')
[Figure 7.2.13 here: step responses g7o and g13 (amplitude 0 to 1.2) vs. time in µs (0.1 to 1.0), with the 0.1 and 0.9 reference levels; rise times tr7 = 210 ns and tr13 = 155 ns.]
Fig. 7.2.13: Step response comparison of the original 7-pole analog-only filter g7o and
the improved 13-pole mixed mode (7-pole analog + 6-pole digital) Bessel-Thomson
filter, g13. Note the shorter rise time and longer time delay of g13. The resulting
rise time is also better than that of the 9-pole analog-only filter (see Fig. 7.2.8).
[Figure 7.2.14 here: envelope delay τe in µs vs. frequency in MHz (log scale), showing the curves τe7 and τe13.]
Fig. 7.2.14: Envelope delay comparison of the original 7-pole analog-only
filter τe7 and the mixed mode 13-pole Bessel-Thomson filter τe13. The 7-pole
analog-only filter is linear to a little over 2 MHz, while the 13-pole mixed
mode filter is linear well into the stop band, up to almost 5 MHz, more than
doubling the useful linear phase bandwidth.
The reader is encouraged to repeat all the above calculations also for the 5-
and 9-pole cases, to examine the dependence of the bandwidth improvement factor on
the analog filter's slope.
As we have seen, the bandwidth improvement factor is very sensitive to the
steepness of the attenuation slope, so the 9-pole analog filter assisted by an 8-pole
equivalent digital filter may be found attractive now, extending the bandwidth
further.
7.2.4 Gain Optimization
We have said that we need a total gain of 100. Since the analog 7-pole filter
can be realized with a three-stage amplifier (one 3-pole stage and two 2-pole stages,
see Fig. 7.2.22), the gain of each stage can be optimized by taking the third root of the
total gain, 100^(1/3) ≈ 4.6416, or 13.3 dB (compare this to a two-stage 5-pole filter,
where each stage would need 100^(1/2) = 10, or 20 dB). The lower gain should allow us
to use opamps with a lower fT and still have a good phase margin to prevent the poles
drifting from the optimum (because of the decreasing feedback factor at high
frequencies). This is important, since the required 12-bit precision is difficult to
achieve without local feedback, and the cost of fast precision amplifiers can be high.
As calculated before, for the sampling frequency of 20 MHz the bandwidths
are 1.660 MHz for the 7-pole analog-only filter and 2.188 MHz for the 13-pole mixed
mode system. In addition to the 13.3 dB of gain, each amplifier stage will need at least
20 dB of feedback at these frequencies, to prevent the response peaking which would
result from the finite amplifier bandwidth (if it were too low). By accounting for the
amplifier gain roll-off of 20 dB/decade we conclude that we need amplifiers with a
unity gain bandwidth of at least 70 MHz for a 9-pole filter and 120 MHz for a 7-pole
filter. Note also that amplifiers with a unity gain bandwidth of 160 MHz would be
required for the two stages of the 5-pole filter with 20 dB of gain per stage and a
system cutoff frequency of only 1.160 MHz!
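The per-stage gain figures quoted above are elementary arithmetic and can be checked quickly (Python sketch):

```python
from math import log10

total_gain = 100

g3 = total_gain ** (1 / 3)      # three stages (7-pole filter, 3+2+2 cascade)
g2 = total_gain ** (1 / 2)      # two stages (5-pole filter)

print(round(g3, 4))                  # 4.6416
print(round(20 * log10(g3), 2))      # 13.33 (dB per stage)
print(round(g2, 1))                  # 10.0
print(round(20 * log10(g2), 1))      # 20.0 (dB per stage)
```

The lower per-stage gain of the three-stage split is what relaxes the opamp fT requirement discussed above.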
% compute the step response of the 7-pole analog part, the impulse
% response of the 6-pole digital part, and their convolution, which
% gives the step response of the complete 13-pole mixed mode system:
g7a=atdr(z7,p7a,t,'s');     % analog filter step response
h6d=atdr(z7,p6d,t,'n');     % digital filter impulse response
g13=conv(g7a,h6d);          % convolution of the two
g13=g13(1:max(size(t)));    % truncate to the time vector length
The plot can be seen in Fig. 7.2.15. The dot markers indicate the first 15 time
samples of the analog filter step response, the digital filter impulse response and the
composite mixed mode filter step response.
[Figure 7.2.15 here: the step response g7a, the digital filter impulse response f6d (its first 11 samples marked), and the mixed mode step response g13, amplitude -0.2 to 1.2, vs. time in µs (0.1 to 0.7).]
Fig. 7.2.15: Convolution as digital filtering for a unit step input. The 7-pole analog
filter step response g7a is sent to the 6-pole equivalent digital filter, having the
impulse response f6d. Their convolution results in the step response g13 of the
effectively 13-pole mixed mode composite filter. The marker dots represent the
actual time quantization as would result from the ADC sampling at 20 MHz. The
impulse response of the digital filter is almost zero after only 11 samples, so the
digital filter needs only the first 11 samples as its coefficients for convolution. Note
that even if the value of the first coefficient is zero, it must nevertheless be used in
order to have the correct response.
[Figure 7.2.16 here: pole-zero plot of the mixed mode filter in the complex plane, showing the analog filter poles and zeros and the digital filter poles.]
Fig. 7.2.16: An example of a similar mixed mode filter, but with the analog
filter using two pairs of imaginary zeros, one pair at the sampling frequency
and the other pair at the Nyquist frequency.
[Figure 7.2.17 here: attenuation in dB (0 to -100) vs. frequency in MHz (log scale), showing fN, fs and the responses Fa7, Fa7z, Fd6 and F13x.]
Fig. 7.2.17: By adding the zeros the analog filter frequency response is
modified from Fa7 to Fa7z. Fd6 is the digital filter response and F13x is the
composite mixed mode response.
[Figure 7.2.18 here: attenuation in dB (0 to -100) vs. frequency in MHz (log scale), showing fN, fs, the responses Fa7z and Fd6, the alias source Sa, its alias Sb, the difference Sr, the product Fd6·Sb, and the ADC resolution line.]
Fig. 7.2.18: The spectrum Sa is aliased into Sb. The difference between the ADC
resolution and Sb (both in dB) gives Sr, the envelope of which determines the
minimum attenuation required of Fd6 to suppress Sb below the ADC resolution.
Fig. 7.2.19 shows the time domain responses. Note the influence of zeros on
the analog filter response.
[Figure 7.2.19 here: the step responses ga7, ga7z and g13 and the digital filter impulse response fd6, amplitude -0.2 to 1.2, vs. time in µs (0.1 to 0.7).]
Fig. 7.2.19: The step response of the 7-pole analog filter ga7 is modified by the
4 zeros into ga7z. Because the alias spectrum is narrower, the bandwidth can be
increased. The mixed mode system step response g13 has a rise time of less than
150 ns (in contrast with the 180 ns for the case with no zeros).
strays. In the figures below are the schematic diagrams of a 3-pole and a 2-pole stage.
We shall use the 3+2+2 cascade to implement the 7-pole filter required.
[Circuit diagram here: an inverting opamp stage with input resistor R1, series resistors R2 and R3, feedback resistor R4, and capacitors C1, C2, C3, input s, output o.]
Fig. 7.2.20: The Multiple-Feedback 3-pole (MFB-3) low pass filter configuration
[Circuit diagram here: an inverting opamp stage with resistors R1, R2, R3 and capacitors C1, C2, input s, output o.]
Fig. 7.2.21: The Multiple-Feedback 2-pole (MFB-2) low pass filter configuration
=" =# =$
a= =" ba= =# ba= =$ b
=" =# =$
=$ =# a=" =# =$ b = a=" =# =" =$ =# =$ b =" =# =$
(7.2.2)
a=" =# =$ b
=" = # = " = $ = # = $
=" =# =$
"
V%
"
V$
V$
"
"
G$ V%
V$
G# V$
V#
V"
"
"
V$
V#
V"
V#
G# G$ V$ V%
G1 G 2 V 1 V 3
(7.2.3)
" aV$ V% b
V$
V%
"
"
V#
V$
G" G # G $ V " V $ V %
- 7.37 -
(7.2.4)
(7.2.5)
P.Stari, E.Margan
V#
V$ V %
(7.2.6)
By examining the coefficients and the gain we note that we can optimize the
component values by making the resistors R1, R3, and R4 equal:

R1 = R3 = R4 = R                                                            (7.2.7)
The coefficients and the gain equations now take the following form:

-(s1 + s2 + s3) = 2/(R C3) - 1/(2 R2 C2)                                    (7.2.8)

s1 s2 + s1 s3 + s2 s3 = (3 + 2R/R2)/(R^2 C2 C3) + 1/(R R2 C1 C2)            (7.2.9)

-s1 s2 s3 = 2/(R^2 R2 C1 C2 C3)                                             (7.2.10)

A0 = R2/(2R)                                                                (7.2.11)

R2 = 2 A0 R                                                                 (7.2.12)

Introducing M = R/R2 = 1/(2 A0) and denoting the polynomial coefficients
K2 = -(s1 + s2 + s3), K1 = s1 s2 + s1 s3 + s2 s3, K0 = -s1 s2 s3, these become:

K2 = 2/(R C3) - M/(2 R C2)                                                  (7.2.13)

K1 = (3 + 2M)/(R^2 C2 C3) + M/(R^2 C1 C2)                                   (7.2.14)

K0 = 2M/(R^3 C1 C2 C3)                                                      (7.2.15)
Next we can normalize the resistance ratios and the RC time constants in
order to eliminate the unnecessary variables. After the equations are solved and the
component ratios are found we shall denormalize the component values to the actual
cut off frequency, as required by the poles. We can thus set the normalization factor
to 1/R (7.2.16, 7.2.17) and define the normalized time constants:
Ca = R C1 ,   Cb = R C2 ,   Cc = R C3                                       (7.2.18)
K2 = 2/Cc - M/(2 Cb)                                                        (7.2.19)

K1 = (3 + 2M)/(Cb Cc) + M/(Ca Cb)                                           (7.2.20)

K0 = 2M/(Ca Cb Cc)                                                          (7.2.21)

From Eq. 7.2.21 the first capacitance is:

Ca = 2M/(K0 Cb Cc)                                                          (7.2.22)

and by inserting it into Eq. 7.2.20 we eliminate Ca:

K1 = (3 + 2M)/(Cb Cc) + K0 Cc/2                                             (7.2.23)

so that:

Cb = 2(3 + 2M)/[Cc (2 K1 - K0 Cc)]                                          (7.2.24)

Likewise, from Eq. 7.2.19, Cb = M Cc/(4 - 2 K2 Cc). By equating the two
expressions for Cb we obtain:

2(3 + 2M)(4 - 2 K2 Cc) = M Cc^2 (2 K1 - K0 Cc)                              (7.2.25)

which is a cubic equation in Cc:

Cc^3 - (2 K1/K0) Cc^2 - [4(3 + 2M) K2/(M K0)] Cc + 8(3 + 2M)/(M K0) = 0     (7.2.26)

With the abbreviations:

p = -2 K1/K0                                                                (7.2.27)

q = -4(3 + 2M) K2/(M K0)                                                    (7.2.28)

r = 8(3 + 2M)/(M K0)                                                        (7.2.29)

this is written as:

Cc^3 + p Cc^2 + q Cc + r = 0                                                (7.2.30)
The real general solution of the third-order equation has been written already in Appendix 2.1:

Cc = (1/3) { -p + 2 sqrt(p^2 - 3q) sin[ (1/3) arctan( 3 sqrt(3) sqrt(p^2 q^2 + 18 p q r - 4 p^3 r - 4 q^3 - 27 r^2) / (9 p q - 2 p^3 - 27 r) ) - π/6 ] }     (7.2.31)

(the expression under the inner square root is the discriminant of Eq. 7.2.30, which is positive here, since all three roots are real; the required Cc is the smallest positive root).
By inserting the poles s1, s2, and s3 into the expressions for K0, K1, and K2,
and the gain A0 into M, and then using it all in p, q, and r, we finally obtain
the value of Cc. The explicit expression would be too long to write here, and, anyway,
it is only a matter of simple substitution, which in a numerical algorithm would not be
necessary. With the value of Cc known we can go back to Eq. 7.2.24 to find the value
of Cb:

Cb = [2(3 A0 + 1)/A0] / [2(s1 s2 + s1 s3 + s2 s3) Cc + s1 s2 s3 Cc^2]       (7.2.32)

and then, from Eq. 7.2.22, the value of Ca:

Ca = -1/(A0 s1 s2 s3 Cb Cc)                                                 (7.2.33)
Now that the normalized capacitor values are known, the denormalization
process makes use of the circuit's cut off frequency, ωh3 (which, to remind you, is
different from the 7-pole filter cut off ωh7, as it is from the total system cut off, ωh13);
ωh3 relates to K0 and the poles as:

K0 = ωh3^3 = -s1 s2 s3                                                      (7.2.34)

From ωh3 we can denormalize the values of R and the capacitors to acquire
some suitable values, such that the opamp of our choice can easily drive its own
feedback impedance and the input impedance of the following stage. For a high system
bandwidth it is advisable to choose R as low as possible, usually in the range
between 150 and 750 Ω. Let us say that we can set R = 220 Ω. This means that:

1/R = 1/220                                                                 (7.2.35)

C1 = Ca/R                                                                   (7.2.36)

C2 = Cb/R                                                                   (7.2.37)

C3 = Cc/R                                                                   (7.2.38)
" $
"
"
=$h
#1
#1 V
E!
G" G# G$
(7.2.39)
(7.2.40)
By inserting the first three poles from p7a for s1, s2, and s3, we obtain the
following component values:

% The poles of the 7th-order analog filter:
p7a :
   1e+7 *
   -3.0181
   -2.8572 - 1.1751i
   -2.8572 + 1.1751i
   -2.3275 - 2.3850i
   -2.3275 + 2.3850i
   -1.1637 - 3.7353i
   -1.1637 + 3.7353i      [rad/s]

% The first three poles of p7a are assigned to the MFB-3 circuit:
s1 =   -3.0181             * 1e+7   [rad/s]
s2 = ( -2.8572 - 1.1751i ) * 1e+7   [rad/s]
s3 = ( -2.8572 + 1.1751i ) * 1e+7   [rad/s]

C2 = 274.5 pF
R2 = 2042 Ohm
C1 = 24.9 pF
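As a quick numerical cross-check of Eq. 7.2.39 and 7.2.40, the printed component values can be fed back in (Python sketch; C3 = 103 pF is taken from the schematic in Fig. 7.2.22, since it is not listed above):

```python
from math import pi

R  = 220.0            # Ohm, R1 = R3 = R4
A0 = 100 ** (1 / 3)   # stage gain, 4.6416
C1 = 24.9e-12         # F
C2 = 274.5e-12        # F
C3 = 103e-12          # F, C13 from Fig. 7.2.22

# stage cut off from the component values (Eq. 7.2.40):
f3h = (1 / (2 * pi * R)) * (1 / (A0 * C1 * C2 * C3)) ** (1 / 3)
print(round(f3h / 1e6, 2))   # 4.88 MHz

# the same frequency from the pole product (Eq. 7.2.34):
s1 = -3.0181e7
s2 = complex(-2.8572e7, -1.1751e7)
K0 = -(s1 * s2 * s2.conjugate()).real
print(round(K0 ** (1 / 3) / (2 * pi) / 1e6, 2))   # 4.88 MHz
```

The two routes agree to within the rounding of the printed capacitor values, confirming the consistency of the stage design.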
The same procedure is now applied to the MFB-2 circuit of Fig. 7.2.21. Writing its nodal equations, with the opamp inverting input again taken as a virtual ground, and eliminating the internal node voltage, gives the second-order transfer function (7.2.41).
=" = #
=" = #
E! #
a= =" ba= =# b
= = a=" =# b =" =#
(7.2.42)
V2
V3
(7.2.43)
and the component values are found from the denominator polynomial coefficients, in
which, in order to optimize the component values, we again make R1 = R3 = R:

s1 s2 = 1/(A0 R^2 C1 C2)                                                         (7.2.44)

-(s1 + s2) = [1/(R C2)] (2 + 1/A0)                                               (7.2.45)

C2 = -[1/(R (s1 + s2))] (2 + 1/A0)                                               (7.2.46)

C1 = -(s1 + s2)/[s1 s2 R (2 A0 + 1)]                                             (7.2.47)
G"
R
Gb
G#
R
so
that
(7.2.48)
Then:
Gb
"
"
#
a=" =# b
E0
(7.2.49)
Ga
a=" =# b
"
=" =#
#E0 "
(7.2.50)
Let us say that here, too, we use the same values for R and A0 as before
(R = 220 Ω, A0 = 4.64; note however that in general we can use a different value if
for whatever reason we find it more suitable; one such reason can be the preferred
values of capacitors). Thus:

C1 = Ca/R ,   C2 = Cb/R                                                          (7.2.51)

and the stage cut off frequency is:

ωh2^2 = s1 s2 ,   fh2 = ωh2/(2π) = [1/(2π R)] sqrt(1/(A0 C1 C2))                 (7.2.52)
From p7a we use the 4th and the 5th pole for the first MFB-2 stage, and the 6th
and the 7th pole for the second MFB-2 stage. With those we obtain the following
component values:

% The 4th and 5th pole of p7a, for the first MFB-2 stage:
s1 = ( -2.3275 - 2.3850i ) * 1e+7   [rad/s]
s2 = ( -2.3275 + 2.3850i ) * 1e+7   [rad/s]
f2h = 5.30 MHz

% The 6th and 7th pole of p7a, for the second MFB-2 stage:
s1 = ( -1.1637 - 3.7353i ) * 1e+7   [rad/s]
s2 = ( -1.1637 + 3.7353i ) * 1e+7   [rad/s]
f2h = 6.23 MHz
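Eq. 7.2.46 and 7.2.47 can be evaluated directly; the Python sketch below reproduces the capacitor values which appear for stages A2 and A3 in Fig. 7.2.22:

```python
def mfb2(s1, s2, R, A0):
    """MFB-2 capacitors from a conjugate pole pair (Eq. 7.2.46 and 7.2.47)."""
    sum_s  = (s1 + s2).real      # s1 + s2 (real, since the poles are conjugates)
    prod_s = (s1 * s2).real      # s1 * s2 (real and positive)
    C2 = -(2 + 1 / A0) / (R * sum_s)
    C1 = -sum_s / (prod_s * R * (2 * A0 + 1))
    return C1, C2

R, A0 = 220.0, 100 ** (1 / 3)

# stage A2: the 4th and 5th pole of p7a
C1, C2 = mfb2(complex(-2.3275e7, 2.3850e7), complex(-2.3275e7, -2.3850e7), R, A0)
print(round(C1 * 1e12, 1), round(C2 * 1e12, 1))   # 18.5 216.3 (pF)

# stage A3: the 6th and 7th pole of p7a
C1, C2 = mfb2(complex(-1.1637e7, 3.7353e7), complex(-1.1637e7, -3.7353e7), R, A0)
print(round(C1 * 1e12, 1), round(C2 * 1e12, 1))   # 6.7 432.7 (pF)
```

These agree with C21 = 18.5 pF, C22 = 216 pF, C31 = 6.8 pF and C32 = 433 pF in the schematic, within rounding.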
We can now finally draw the complete circuit schematic diagram with
component values:
[Schematic of Fig. 7.2.22 here: input unity gain buffer (UGB, R0 = 1 MΩ, C0 = 12 pF), followed by three inverting stages driving the ADC: A1 (MFB-3) with R11 = R13 = R14 = 220 Ω, R12 = 2042 Ω, C11 = 25 pF, C12 = 275 pF, C13 = 103 pF; A2 (MFB-2) with R21 = R23 = 220 Ω, R22 = 1021 Ω, C21 = 18.5 pF, C22 = 216 pF; A3 (MFB-2) with R31 = R33 = 220 Ω, R32 = 1021 Ω, C31 = 6.8 pF, C32 = 433 pF.]
Fig. 7.2.22: The realization of the 7-pole analog filter for the ADC discussed in the
aliasing suppression example. The input signal is separated from the filter stages by a
high impedance 1 MΩ, 12 pF unity gain buffer (UGB). The first amplifier stage A1,
with a gain of 4.64, is combined with the third-order filter; its components are
calculated from the first three poles taken from the 7-pole analog part of the 13-pole
mixed-mode system. The following two second-order filter stages A2 and A3 have the
same gain as the first stage, whilst their components are calculated from the next two
complex conjugate pairs of poles from the same bunch of 7. All three amplifying stages
are inverting, so the final inversion must be done either at the signal display, in the digital
filter routine, or simply by taking the 2's complement of the ADC's digital word.
Wideband Amplifiers
Index
negative feedback, voltage, 5.95, 5.114
negative feedback, current, 5.62-79
feedback (see error correction)
feedforward (see error correction)
frequency response, definition, 2.14, 6.7-8, 6.21-26
fT doubler, 3.75
Gilbert multiplier, 5.123
four-quadrant, 5.127
gain control, continuous, 5.125-127
introduction, 3.7
improving linearity of, 5.120
JFET source follower, 3.79
circuit analysis, 3.79-82
capacitances, inter-electrode, 3.80
envelope delay, 3.84-85
frequency response, 3.82-83
magnitude, 3.83
magnitude, 3.83
considering input generator
resistance, 3.93
with inductive generator
impedance, 3.90, 3.93
input admittance, 3.92
input impedance, 3.8990
input protection network of, 5.52
against long term overdrive,
5.2526
against static charges, 5.5253
negative input conductance,
normalized, plot of, 3.91
compensation of, 3.92
alternative compensation of, 3.94
overdrive recovery, 5.47
phase response, 3.84
tendency to oscillate, 3.90, 3.93
similarity with Colpitts oscillator,
3.90
step response, 3.8586
considering input generator
resistance, 3.87
MOSFET source follower, 5.48
maximum amplitude range, 4.65
Miller capacitance, Miller effect, 3.13
multistage, 4.9
two stage, inductively peaked, 5.10
optimum number of stages for minimum rise time, 4.17-19
negative feedback (see error correction)
non-peaking, multistage, DC coupled, 4.9
decibels per octave, explanation of, 4.11
envelope delay, 4.12-13
frequency response, 4.9-10
optimum single stage gain, 4.17-18
optimum number of stages, 4.17-19
Amplifiers
amplifier stages, basics, 3.9
cascode non peaking, 3.37
basic circuit analysis, 3.37-38
damping of Q2 emitter, 3.37-40
input impedance, basic, 3.49
compensation of Q2, 3.46-47
step response and preshoot, 3.40
thermal compensation of Q1, 3.44
thermal distortion of step signal, 3.43
thermal stability, bias optimization, 3.44
cascode differential, 3.70-71, 5.108
improved Darlington, 5.109-110
feedforward correction, 5.111
cascode emitter peaking, 3.49
circuit analysis, basic, 3.49-52
Bode plot of, 3.53
complex poles of, 3.53
input impedance irregularity, 3.50
input impedance compensation, 3.54-56
poles, placement of, see complex poles
thermal problems, 3.42-46
cascode folded, 3.68
Cascomp, 5.112-114
common base, 3.17
base emitter effective capacitance, 3.19
effective emitter resistance, 3.35
input impedance, 3.18
transimpedance, 3.34
input impedance, 3.34
Miller capacitance, 3.33-34
common emitter, 3.9
circuit analysis, 3.9-15
emitter resistance, 3.12
voltage gain, calculation of, 3.14-15
input impedance, 3.14
input pole, 3.15
Miller capacitance, 3.13
CRT driver, 3.10, 5.24
differential, 3.69
circuit analysis of, 3.78
common mode operation, 3.70
differential mode operation, 3.70
two stages example of, 5.9-10
drift correction of, 3.69, 5.106-107
simple, 5.43
active, 5.45
envelope delay/advance, definition of, 2.20-21
error correction, 5.94, 5.98, 5.104
feedforward, 5.96-100, 5.111-116
improved voltage feedback, 5.80
1.61-62
Laurent series, 1.61
examples of, 1.63
Complex integration around many poles, 1.65, 1.69
Cauchy-Goursat Theorem, 1.65-66
Equality of Integrals ∫ F(s)e^(st) ds over the inversion path from -j∞ to +j∞