Switch Mode Power Supplies A to Z, 2nd Edition – Add-On
by
Sanjaya Maniktala
and
Nicola Rosano
ISBN: 9798588905015
INDEX
Index ........................................................................................................................................................................ 4
Preface..................................................................................................................................................................... 7
Chapter 1 – Resonant Power: LC to WPT ....................................................................................................... 8
1.1 Introduction.................................................................................................................................... 8
1.2 Resonance Lessons back to Classical Power .......................................................................... 10
1.3 WPT is (or should have been) the LLC ..................................................................................... 11
1.4 Design Goals.................................................................................................................................. 11
1.5 The Control Loop ......................................................................................................................... 13
1.6 Resonant Circuits/High Efficiency ........................................................................................... 13
1.7 Resonant Circuits and Efficiency Nuances .............................................................................. 16
1.8 Control Loop or not ..................................................................................................................... 19
1.9 The First Harmonic Approximation ......................................................................................... 24
1.10 What is so special about the LLC? ......................................................................................... 25
1.11 The Elusive Resonant Peak .................................................................................................... 27
1.12 Heading to a Practical Design: Setting Gain Target ........................................................... 29
1.13 LLC and the Power of Magnetics ........................................................................................... 30
1.14 Defining the Kernel ................................................................................................................. 33
1.15 Practical Design Example of a wide-input 900W LLC/WPT converter ......................... 35
1.16 Validating our design through simulations ........................................................................ 39
1.17 The PoE design Revisited and Improved ............................................................................ 42
1.18 Conclusion ................................................................................................................................. 46
1.19 Appendix 1 – MathCad Spreadsheet ..................................................................................... 47
1.20 Appendix 2 – Alternative LLC design strategy: the Graphical approach ....................... 51
1.21 Appendix 3 – Building the LLC discrete modulator .......................................................... 65
1.22 Appendix 4 – Building a LLC hybrid modulator ................................................................. 73
Chapter 2 – The Dual Active Bridge ............................................................................................................... 78
2.1 Introduction.................................................................................................................................. 78
2.2 Stable Conditions to Topologies ............................................................................................... 80
2.3 The DAB Schematic ...................................................................................................................... 82
2.4 Understanding Transformer action ......................................................................................... 84
2.5 The Role of the Leakage Inductance “Llkg” ............................................................................ 90
2.6 Voltage and Current segments in the DAB .............................................................................. 92
2.7 Discovering the Current Starting/Ending values ................................................................ 101
2.8 Input DC current component and Power Capability (Scaling Laws too) ........................ 103
2.9 Sample DAB design: 7kW (for PV applications) .................................................................. 105
2.10 More Design Curves and Validation ................................................................................... 106
2.11 What if: we had D2 > D1? ...................................................................................................... 108
2.12 The Two choices for Angle 1 for the same Power and Different Gain Targets .......... 114
2.13 What if: We run the DAB in synchronous mode? ............................................................. 116
2.14 Final Solved Example of Chapter 2 ..................................................................................... 116
Chapter 3 – Small Signal Modeling (Guest Chapter) ............................................................................... 120
3.1 Intro .............................................................................................................................................. 120
3.2 Voltage Mode Control PWM switch ........................................................................................ 123
3.3 PWM switch at work: Voltage Mode CCM assumed ............................................................. 126
3.4 Peak Current Mode PWM switch ............................................................................................ 137
3.5 PWM switch at work: Peak current mode CCM assumed ................................................... 142
3.6 Input Filter Design with PWM switch .................................................................................... 153
3.7 Appendix 1 – Modelling the peak current mode control modulator ............................... 156
3.8 Appendix 2 – GNU Octave / Matlab Voltage mode and Current mode scripts .............. 167
Chapter 4 – Analog Control Loop Theory ................................................................................................... 183
4.1 Introduction................................................................................................................................ 183
4.2 Air-conditioners to Closed-loop Switchers? ......................................................................... 185
4.3 The Unitrode Seminar of 1996 ................................................................................................ 188
4.4 “Control Problems your Mother Never Told You About" ................................................... 188
4.5 Applying Control Loops to Real-world Switchers ............................................................... 189
4.6 Open and Closed Loop Gains.................................................................................................... 194
4.7 Criterion of instability and “safety margin” ......................................................................... 198
4.8 DC Gain and Settling Error ....................................................................................................... 201
4.9 Voltage Positioning ................................................................................................................... 203
4.10 Input Ripple Rejection .......................................................................................................... 204
4.11 Gain of Entire Plant ............................................................................................................... 205
4.12 Input Line Voltage Feedforward ......................................................................................... 207
4.13 Input Correction in Current-mode Controlled Buck ....................................................... 207
4.14 Transfer Functions of Other Topologies ........................................................................... 208
4.15 Enter: The Laplace Transform ............................................................................................ 209
4.16 Understanding Delays........................................................................................................... 212
4.17 Large Signal Response in Other Topologies ..................................................................... 212
4.18 Pitfalls of Terminology ......................................................................................................... 213
4.19 Stepping the Reference Voltage .......................................................................................... 214
4.20 The K-Factor Method ............................................................................................................ 214
4.21 Practical Analog Compensation Strategy .......................................................................... 215
4.22 Typical analog compensation exercise.............................................................................. 216
4.23 Compensating Other Topologies ........................................................................................ 225
4.24 Summary of Plant Transfer Functions ............................................................................... 225
4.25 Measuring Loop Gain on the Bench .................................................................................... 228
4.26 Tweaking a Type 3 Compensator ....................................................................................... 229
4.27 Underlying Approximations of Type 3 .............................................................................. 234
4.28 Appendix 1 – Building Type 2, Type 3 compensation networks in MathCad .............. 235
Chapter 5 – PID and Digital Control Loops ................................................................................................ 239
5.1 Introduction................................................................................................................................ 239
5.2 Problems with CMC ................................................................................................................... 241
5.3 Contribution of Plant and Feedback Blocks ......................................................................... 244
5.4 LC Post Filter Analysis .............................................................................................................. 246
5.5 Developing Intuition ................................................................................................................. 250
5.6 Logarithms are Natural ............................................................................................................ 251
5.7 Fourier Series to Laplace Transform ..................................................................................... 254
5.8 Building Blocks: Poles and Zeros ............................................................................................ 261
5.9 Plotting some Transfer Functions that we may encounter (or may not!) ...................... 262
5.10 Summary of Common Functions ......................................................................................... 270
5.11 Analog to Digital ..................................................................................................................... 271
5.12 "Peakiness" of Transfer Functions ..................................................................................... 272
5.13 Limits of Analog Compensation .......................................................................................... 274
5.14 Unrecognized Effects of "Q-Mismatch" in Type 3 Analog Compensators ................... 276
5.15 Digging deeper into "Q-Mismatch" ..................................................................................... 277
5.16 Conditional Stability Re-examined..................................................................................... 277
5.17 "Forcing" an Analog Compensators to be More Effective .............................................. 280
5.18 What is Damping? .................................................................................................................. 282
5.19 Beyond Critical Damping...................................................................................................... 282
5.20 A Useful Hint: Impedance of a Capacitor ........................................................................... 283
5.21 Introduction to PID Coefficients ......................................................................................... 288
5.22 The Grand Analogy ................................................................................................................ 291
5.23 Finally: Q-mismatch Resolved ............................................................................................. 293
5.24 Bench Validation .................................................................................................................... 294
5.25 Alternative Ways to write PID coefficients ....................................................................... 294
5.26 Conclusion ............................................................................................................................... 296
5.27 Appendix 1 – MathCad spreadsheet ................................................................................... 297
Collecting Typos on A-Z 2nd ............................................................................................................................ 298
PREFACE
These chapters were to have been the additional pages of a previous book, to make it the third
edition. Unfortunately, I am sad to point out, the short-sighted publisher handed me a young,
arrogant (IMHO) program manager from the UK, who really spoiled my mood from Day 1! I felt I
knew the process well, having written three (very well-known) books through them… I was not
in the mood to “prove” my writing capability to him by providing “sample chapters” by
designated “Date X”, “Date Y”, and so on. In fact, I already had almost all the pages presented
herein written out by the first date in my contract, but the guy was waiting only for his “sample
chapter”, to first kindly “approve” my writing (hopefully not technical) skills, before giving me
even basic Dropbox privileges to upload the very same pages presented here… Yes, they never
got to see the material as a result, but you can. Yes, I walked out of the contract. Their loss.
In my personal life, things seem to have become very difficult/busy for several reasons, so I
needed to get this out to the world quickly… in particular to acknowledge Nicola Rosano… the
guy who caught a string of mistakes in A-Z Second Edition (Note: I accurately call them
mistakes, not "typos"). He also did a great job in popularizing my scaling laws, first publicly
revealed by me in an IEEE PELS SFBAC seminar. But that was in pre-COVID-19 times… Nicola
had to do a (very well-attended) webinar instead. The type of guy he is, he always goes out of
his way to acknowledge me. He is super at that! I do that too always (e.g. my constant
acknowledgements of my mentor Dr GT Murthy in India). Yes, Nicola talked about my discovery
of current ripple ratio, and my discovery of "hidden-in-plain-sight" scaling laws. He also kindly
provided a superb guest chapter here as well as highly technical insights spread all over the
chapters.
In Chapter 5 you will find another hitherto unnoticed discovery of mine, dating back to 2015. I
call it the "Q-mismatch" issue. See how I solved it and overnight doubled the efficacy of a
vendor's vaunted programmable "PID" controllers. It was an issue that, I still feel, had been
overlooked by all the experts, and my solution was just a few lines of simple calculations. With
spectacular improvement in loop response right off the bat.
Also, Eric Wen, a very capable English to Chinese translator of some of my books, is hereby free
to translate this as he wishes, perhaps with China Machine Press… I welcome that too. He was
a live translator accompanying me for the whirlwind 5-city Sanjaya2016 tour of China. Always
gave great inputs too. As did Sesha Panguluri, a dear friend/colleague of mine over the years,
whom I really enjoy thinking along with, and even relearning power with, all the time… He is a
master at firmware, but very adept at understanding and discussing power too. Hence his
contribution to my basic way of thinking, which I must finally acknowledge here.
With that, all I will say is: enjoy the book, and I hope you remember me for this too. And thanks
for your overwhelming support of my previous books over the years. My reward was that you
liked them! That is all I ever cared for.
Sanjaya Maniktala
Dec 31, 2020
CHAPTER 1 – RESONANT POWER: LC TO WPT
1.1 Introduction
We will start with a rather abstract discussion before we delve into finer details. Because
resonant power is tricky—to state it mildly! The devil is not in the details! Not initially at least.
One can easily mistake the forest for the trees, getting utterly lost without even realizing it.
We recommend a soft transition from the relative comfort of our familiar world of classical
(“PWM”-based) power into the as-yet non-intuitive world of resonant power, including wireless
power transfer (WPT). Preferably with a stopover at the “LLC topology”, for reasons which will
become clear. The ultimate aim should be to cultivate and “fine-tune” a completely new type of
reasoning—to deal with something, well… completely new. At some point, we will realize that
our prized “prior experience” is now merely excess baggage. We need to discard it quickly. To
consciously “unlearn” much of what undoubtedly served us well over the years. Because first
and foremost, we need to know that we really don’t!
For example: “Lower the frequency to reduce switching losses and improve efficiency”. Or
“Minimize losses by reducing the leakage inductance”. Sounds convincing! Except that neither of
these statements is valid when it comes to resonant topologies, implemented “correctly” of
course. In the strange new world ahead, there are almost no switching losses. And the efficiency
actually worsens if we lower the frequency! Why? Because the conduction losses increase when
the frequency is lowered; they do not stay fixed, as we’ve always tended to assume—a sort of
inverse switching loss. And the energy residing in the “notorious” leakage inductance, too, is
almost fully recoverable. In fact, it facilitates further improvement in efficiency, by crowbarring
out electrostatic stored energy buried in, say, EMI-suppression capacitors placed on switching
nodes, and cycling it back to the input for convenient reuse! And no energy whatsoever is
lost “mysteriously” between the coils either. It is all accounted for. Most surprisingly perhaps, it
can also be shown that low coupling, i.e. high leakage inductance, actually enables greater
power, thrown over larger distances in wireless power transfer (“WPT”), than tightly coupled
coils. Because the rectified DC voltage on the Secondary (receiver) side tends to reflect via basic
“transformer action” back onto the Primary (transmitter) side, clamping the voltage across a
certain part of its coil, thereby dictating the dI/dt in it, indirectly limiting the maximum power
that can be drawn by the system. So, less coupling, less clamping, more dI/dt, more power. This
is a world we were totally unprepared for, perhaps…!
All the above advantages, however, accrue only within certain definable limits. But not
necessarily “hard” limits that we are familiar or comfortable with. Nature is bounded in its own
responses—to keep itself in control too! Hot air rises, to allow cold air to rush in, to cool the hot
spot. Summer gives way to winter, which eventually hands us over, back once again to summer,
and so on. That is nature! Self-sustaining. We need to respect its processes, and boundaries. And
if we want to go further and actually exploit the awesome power that resonance in particular
brings to our doorstep, quite literally, we must also realize it comes to us with a hefty price tag
attached: it demands total design expertise. Certainly not a convincing set of alibis.
Such as when things didn’t go as planned: instead of questioning our basic assumptions, we
collectively shrugged it off as the “inherently poor user experience of inductive (wireless) power
systems”. What if it was merely poor designer experience? Undeterred, we went on to declare:
“resonant systems, on the other hand, have inherently good user experience”. Resonant being
defined as “receiver and transmitter both tuned to exactly the same frequency”. Did we forget
that even in our “inductive systems” we were doing exactly the same thing already? Or at least
intended to! Actually, in either case, that “double tuning” was always an erroneous concept to
start with, as we will explain. It was based on some radio-frequency (RF) gut instinct of ours
from a previous life, completely inapplicable to coupled systems such as those we are dealing
with here.
It got worse: because even accepting the “resonant-inductive” dichotomy for argument’s sake,
one might wonder what else was even left to “resonate” further with, other than the receiver
(Rx) and transmitter (Tx) of course, for us to announce to the world an upcoming, “resonant
extension to the (inductive) Qi standard”? Wasn’t that to have been AirPower, by the way? What
went so “unexpectedly” wrong?
Two wrongs never made a right, certainly not three. But what really seems to have happened
historically is that engineers working in classical power conversion, or RF electronics, or maybe
just Bluetooth, headed into a brand-new, and very tricky, area called WPT, rather
unsuspectingly. And much too directly: they did not have the luxury of that fortunate chance
encounter with the big brother of WPT, the LLC topology, which this author had. Otherwise, they
too might have realized somewhere along the way that if the relatively better-understood LLC
converter of today were to be designed the same way as the WPT systems of today, there would
not be a single satisfactorily working LLC converter left on the planet either.
Having set our minds to really understanding resonance, preferably via the LLC converter, we
should be forewarned that it can still all seem very intimidating at first. There is hope though:
things can get better quickly if we follow the process of correlating our developing “mental
picture” to actual simulations, based on detailed math spreadsheets, then building something
however small, and finally doing sanity checks, using lab data—a process of triangulation which
this author recommends, as the one he has always followed: MATHCAD → SIMPLIS → LAB
→ SIMPLIS → MATHCAD → SIMPLIS → LAB, and so on. Not for the faint-of-heart. Certainly not for
the gung-ho.
the gung-ho.
Maybe, we will finally see a pattern emerge too! The big picture! Because acquired mastery, if
any, is ultimately best measured by how effortlessly we can communicate it back, even to the
relative novice on occasion. Simplifying what is perhaps benumbing complexity!
That may become evident in the powerful, hitherto “hidden-in-plain-sight” power and frequency
scaling laws that we will reveal here. Some of those were first discovered and published in
Chapter 19 of the book, Switching Power Supply Design and Optimization, 2nd edition, by
McGraw-Hill, in 2014. In this chapter we will go further: we will add to those laws, and then
exploit them unabashedly—to unveil an astonishingly simple, barely half-page design process.
But one that is as thorough as can be. And it can be used right away for designing any LLC-type
resonant converter or WPT system: any power, any frequency, any output voltage, any input
voltage, in fact tolerant to input variations too. Oh, we forgot to mention: any coupling too!
A varying input voltage, incidentally, was supposed to be the bane, if not death-knell, of
LLC/resonant converters. Not so anymore, we will be pleased to discover. No longer do LLC
converters have to remain fearfully confined to that familiar, comfortable location just after a
power factor correction (PFC) stage. From now on, LLC can be front-end (and center-stage
too!). And we will also be able to design WPT systems, correctly for a change.
As a sneak peek, here are the scaling patterns that we will be using (bullet “c” below, being the
brand-new member of our scaling family):
a) To double the power, halve the inductance and double the capacitance
b) To double the frequency, halve the inductance and halve the capacitance
c) To quadruple the power, double the input voltage
Instead of “double” or “halve”, we can use other scaling factors; the laws generalize in the
obvious way.
So, basically, in a few simple steps, none even involving the non-intuitive j=√(-1) for a change (!),
we can scale an almost randomly chosen, but previously carefully-studied “kernel”, to virtually any
power level and frequency range we want! That is the true power of scaling, what the movies call
“cutting to the chase”! And though it looks too good to be true at times, it has a happy ending: it
is as accurate as can be. As can be confirmed very easily by an actual build.
It is important to keep in mind that if we change the power, we do not change the frequencies
involved, since those depend on the product of L and C, and we divided L by the power scaling
factor and also multiplied C by the same factor, so the LC product remains unchanged. Similarly,
when we scale frequency, we divide both L and C by the same frequency scaling factor, keeping the
ratio C/L unchanged, and that is what power depends on. So the real beauty of the first two scaling
laws is that they are “orthogonal”. They don’t interfere with each other. That is why this discovery
is so fundamental and powerful.
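The orthogonality of the two laws is easy to sanity-check numerically. Below is a minimal sketch (with purely illustrative L-C values, not taken from any design in this book): power scaling leaves the LC product, and hence the resonant frequency, untouched, while frequency scaling leaves the L/C ratio, and hence the characteristic impedance, untouched.

```python
import math

def scale_power(L, C, k):
    """Scaling law (a): to scale power by k, divide L by k and multiply C by k."""
    return L / k, C * k

def scale_frequency(L, C, k):
    """Scaling law (b): to scale frequency by k, divide both L and C by k."""
    return L / k, C / k

# Illustrative (hypothetical) starting values: 100 uH, 100 nF
L0, C0 = 100e-6, 100e-9
f0 = 1 / (2 * math.pi * math.sqrt(L0 * C0))    # resonant frequency
Z0 = math.sqrt(L0 / C0)                        # characteristic impedance

# Double the power: the LC product (resonant frequency) is unchanged
L1, C1 = scale_power(L0, C0, 2)
f1 = 1 / (2 * math.pi * math.sqrt(L1 * C1))
print(f"f after power scaling:     {f1 / f0:.3f} x f0")   # 1.000

# Double the frequency: the L/C ratio (impedance, power) is unchanged
L2, C2 = scale_frequency(L0, C0, 2)
f2 = 1 / (2 * math.pi * math.sqrt(L2 * C2))
print(f"f after frequency scaling: {f2 / f0:.3f} x f0")   # 2.000
print(f"Z after frequency scaling: {math.sqrt(L2 / C2) / Z0:.3f} x Z0")  # 1.000
```

Each law moves only its own quantity: that is the orthogonality claimed above.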
And, though we usually recommend not believing anything you may have read in, say,
Switching Power Supplies A-Z, Second Edition (resonance being a new frontier), the above
scaling laws apply equally well to classical power.
Take a Buck converter working well at 5V, 1A @ 100kHz. Keeping everything else the same, if your
Boss asks you to double the power overnight, to 5V, 2A, you simply need to double the output
capacitance (to keep the same output ripple, as that is proportional to load current, but inversely
proportional to capacitance), and halve the inductance. Of course, though you halve the
inductance, its size doubles, since size depends on ½L·I², and the current has doubled too.
Similarly, the size of a capacitor depends on ½C·V². Here, the voltages were kept constant, so the
size of the capacitor will also double. Size scales linearly with power, as we intuitively expected!
The 5V, 2A converter will not be unstable either, because the crossover response depends on the
LC break frequency (plant), which has not changed. Note also that the current ripple ratio, r,
remains the same: ΔI has doubled in the inductor (since we halved L), just as we doubled I. It all
fits well! We have the same current ripple ratio, so the design is still optimum.
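The Buck example above can be verified with a couple of lines of arithmetic. A minimal sketch, using the standard CCM Buck ripple relation ΔI = VO(1 − D)/(L × fSW), with hypothetical numbers (VIN = 12V, L = 100µH) that are not from the text:

```python
VIN, VO, FSW = 12.0, 5.0, 100e3       # hypothetical operating point
D = VO / VIN                          # Buck duty cycle: VO = D x VIN

def ripple_ratio(L, I_load):
    """Current ripple ratio r = dI/I for a CCM Buck: dI = VO*(1 - D)/(L*fsw)."""
    dI = VO * (1 - D) / (L * FSW)
    return dI / I_load

r_1A = ripple_ratio(100e-6, 1.0)      # original 5V, 1A design
r_2A = ripple_ratio(50e-6, 2.0)       # power doubled, inductance halved

print(f"r at 1A: {r_1A:.3f}, r at 2A: {r_2A:.3f}")  # identical, as claimed
```

Halving L doubles ΔI, and I has doubled too, so r = ΔI/I (and hence the "optimality" of the design) is preserved.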
Similarly, if your Boss now demands you take the 5V, 2A @ 100kHz converter overnight to
200kHz, all you need to do is halve both the inductance and the capacitance (and their sizes). In
doing so, the LC break frequency, which is inversely proportional to the square root of the LC
product, will also double, exactly how we want it to behave as a function of switching frequency.
So, we will likely still be stable, but may need to scale the capacitances involved in the
compensation circuitry. Indeed, we can just halve all of them, and things should be fine. The
current ripple ratio has also remained the same here.
Any perceived, or “physical”, difference, between an LLC converter and a WPT system, is not
what may seem “obvious” to our naked eye—such as the distance we are throwing, or
transmitting, energy across, or in the shape of the windings/coils used. “Coil” or “winding”,
whatever we may call it, irrespective of how it looks, just becomes an “L” in our final electrical
schematic. Similarly, distance or separation, whether mm or a few meters, just ends up as a
different coupling coefficient (“K”) in our math spreadsheet. LLC and WPT are really not much
different, electrically.
The biggest difference between the two is actually the scariest one! The one you “see” but don’t
observe. Or “know” but don’t comprehend. In an LLC transformer, the windings are fixed
relative to each other (i.e. a constant coupling), but in a WPT system, the coupling coefficient
(“K”), can change dramatically with varying placement, or different alignments, of the receiver
(Rx) and transmitter (Tx) coils with respect to each other. The constantly varying K changes
everything. The overall result is hard to predict! But it can be. In fact, it has not only been fully
predicted, but solved and harnessed too. Those are however proprietary solutions which
cannot be revealed. But a lot of hints will emerge as we slowly unravel the grand resonant
puzzle, using the simple, graphical aids presented herein.
The graphical procedure we present throws light on what was considered to be the most
baffling piece of the resonant puzzle: what is the correct, or most optimal L-C component
selection—to handle a certain power, over a desired input range? You would think that others in
the field of resonant power would be very happy to know what those values are. Not really,
because they likely don’t even realize this is in fact the biggest challenge in resonant power.
Because in classical power, the “power capability” of the inductor was never the basic,
overwhelming concern for a designer. A simple buck regulator, for example, has the simple “DC
transfer function” VO = D × VIN. It is implicitly load-independent, at least to first order. We can draw
any load current theoretically! It is essentially “current unlimited”! And we could use almost
any inductance too (on paper), and that simple equation was essentially still valid. No obvious
“power capability” limits. If any, they were determined by parasitics. Indeed, in practice, there
is a power limit, but mainly determined by second-order effects. Such as those arising from
parasitics, like DC resistance of the inductor (“DCR”), current capability of the switch, its set
“safe” current limit, the current at which the inductor may saturate, or get too hot, and so on.
But all that changes dramatically in resonant power, and you may never even realize it. The
current Qi standard has approved a wide range of L’s and C’s, without a thought, all for the same
target power level. Yes, they “work”, sort of, but were the component selections optimal?
A hint here: every LC resonant curve has a certain inherent “bulge” or “peaking” in relation
to the R present. So, you might think you have a certain gain available, but the moment you
change R, and try to draw more power, the shape suddenly changes. Try walking across a carpet
that sags as you walk over it, or is even moved sideways. We will see that the gain curve we
thought we had not only “flattens”, essentially preventing us from drawing more and more
power, but also shifts sideways, quite dramatically in some cases, not only putting our targeted
power possibly out of reach, but throwing any proposed control loop in doubt too… We may
have thought that to increase power we need to lower the frequency, but since the peak
may have shifted to the “other side”, we may now have to reverse our entire “direction of
correction”, or “DoC”… without realizing it, let alone knowing how to do it. Basically,
resonance, and the awesome power it brings to us, is inherently self-limiting. That is what makes
all the powerful forces of nature stay bounded and self-sustaining.
A hint to our strategy to handle input droops too: to handle a definable droop in the input
voltage, we need to overdesign the peak power capability at the maximum input voltage point!
A bit more of a “bulge”! But only a certain definable amount—to avoid straying into full-blown
overdesign territory, which would impact cost and efficiency. Careful overdesign is our goal here.
The graphical aids not only point to the correct selection of the L-C components, but the best
operating points too: to do a meaningful “worst-case” system-level simulation—for computing
RMS currents, and so on. Because one of the biggest, unsaid problems facing all the simulation
fans out there, is that in resonant power, it is sadly no longer as simple as doing the “min-load”,
“max-load”, “min-Vin” and “max-Vin” corners! That was just child’s play! In resonance, we are
at the doorstep of an almost infinite array of possibilities to investigate. Can’t be done without
infinite simulation time! So, we desperately need to know precisely, beforehand, the best L-C
values to use, and even the target operating frequency to set! At least somewhere very close to
it—in the ball-park. By providing those in this chapter, we will finally be enabling meaningful,
quick simulations too.
Simulations are in fact strongly recommended in resonant power for another reason. Because,
as we will see, all closed-form equations in the literature are based on something called the
“first harmonic approximation” (“FHA”). You may ask: how approximate? Hmmm, that depends!
So, it is not an exact science, is it? Well, it is tricky, as we had warned! In this chapter therefore,
we will focus not on creating accurate or impressive “equations”, but more on unearthing the
trends and pitfalls of resonance, thereby homing in on the best points for conducting actual
simulations, so as to provide accurate reliable numerical estimates eventually, not
approximations based on FHA. That should eventually lead to a more reliable product, designed
in a predictable time-frame!
But a warning here: a lot of engineers likely did not understand WPT because they
inadvertently used a certain default transformer model that is provided in most electrical
simulator packages—which actually does not apply! It is based on certain assumptions which,
in effect, do not correctly model either the LLC or WPT magnetic structures. So, we need to
create our own simplified (transformer) model. The correct/best models can become really
complex, and we’ll reserve that discussion for a different day!
1.5 The Control Loop
The sloshing occurs only at a certain natural frequency though—something that is not so obvious
from the figure perhaps! But look more closely: if we want the two opposite current
components (the “180 degree apart” components) in the L and C to also be equal in magnitude,
for complete matching, in effect we want their impedances (2πf×L) and 1/(2πf×C) to be equal.
And that implies the applied frequency be exactly 1/(2π√(LC)), called the “resonant frequency”,
to ensure a complete “match”. A perfect couple! You can now leave them alone… A parallel LC
tank it is!
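As a quick numeric sanity check of that matching condition (the component values below are purely illustrative, not taken from any standard or design):

```python
import math

def resonant_frequency(L, C):
    """Frequency at which the L and C impedances match: f0 = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values only: 24 uH with 100 nF
L, C = 24e-6, 100e-9
f0 = resonant_frequency(L, C)

# At f0, the two impedance magnitudes are exactly equal
ZL = 2 * math.pi * f0 * L          # inductive reactance, ohms
ZC = 1.0 / (2 * math.pi * f0 * C)  # capacitive reactance, ohms
print(f"f0 = {f0/1e3:.1f} kHz, |ZL| = {ZL:.2f} ohm, |ZC| = {ZC:.2f} ohm")
```

For these values the match occurs near 103 kHz, conveniently close to typical Qi operating frequencies.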
When that exact matching occurs, say by sweeping the applied frequency of the source, the
currents in the L and C will not only be opposite in direction, but numerically equal too, and
will thus be continuous (to each other), self-contained and self-sustaining. In this condition, no
net current will theoretically ever need to be drawn from the input source, at least eventually.
The voltage on the LC combination can, and will, rise to whatever voltage the source applies,
even up to infinity if the source is up to it! But finally, the net current drawn from
the source will always (eventually) be zero, provided the source is exactly at the specific
“resonant frequency” above (otherwise not)! And finally, since no current is drawn from the
source anymore, even if we remove it altogether, the resonant tank can go on forever, sloshing
energy back and forth between the L and the C (at its natural resonant frequency). As a
corollary: the only reason the voltage across the tank (and the current through it) decay, is due
to resistances: ESR, DCR, and any “load resistor” that we may connect across it.
Similarly, a series LC combination presents zero impedance, which is why, to avoid infinite
currents, we prefer to explain it using a current source. Eventually, the net voltage across the
combined series-LC will drop to zero (equal to the final voltage across the AC current source).
The currents in the L and C have been forced to be the same as they are in series, but their
voltage components are now opposite. In fact, those can be made exactly equal in magnitude
too, but again, only at the specific applied frequency of 1/(2π√(LC)). We thus have a series-LC
tank!
Hypothetically, we can suddenly switch out (i.e. bypass), the current source, leaving the series
LC components connected in series. Things will carry on as before! Thinking harder, we realize
we have arrived at exactly the same configuration as the parallel LC tank if its voltage source
were suddenly removed! Both are merely a simple loop consisting of L and C now, with
circulating currents accompanied by pulsating/alternating voltages, with energy constantly
sloshing between the L and the C, and with no dissipation ever. The only difference being that
in one case we used a voltage source initially, to set the remnant voltage amplitude across each
component of the now standalone LC tank, allowing the current to be what it is (based on
impedances), and in the other case we used a current source to set the remnant current
amplitude flowing in each of the L and C, but allowing the voltage across each to be what it is.
In both cases however, we can think of it in the following way too: similar to the topmost
Mathcad-based plots in Figure 1.1, the inductor “returns” energy to the capacitor (which is now
behaving as the applied AC voltage source!).
A bit later, the capacitor “returns” its energy back to the inductor (which is now behaving as the
applied AC current source). Notice how it is all so complementary and elegant!
Note also that in the process, we can get huge “amplification” too: for example, if we had
driven the series LC tank with a voltage source instead of a current source, we
would get (close to) infinite currents. Similarly, a current source applied to the parallel LC
would have forced it to an infinite voltage.
Intuitively, in a series-LC circuit at resonance for example, the L and C impedances are equal
and opposite, so it behaves quite like a “dead short” at that specific “resonant” frequency. This
resonance action is basically very helpful to be able to create fields extending over greater
distances, to instigate Faraday’s Law, which is what we try to do all the time in WPT. That was
exactly how Nikola Tesla tried to do it too. He created very high voltages (millions of
volts), instigated by a huge dI/dt, produced by interrupting very high current flowing in
a huge inductor (V = L×dI/dt of course).
But note how nature tends to self-stabilize: the more energy you try to draw from the resonant
tank, the less effective it becomes, because it “knows” that now you have introduced (load)
resistance, and the resonant voltage/current starts collapsing. Like a carpet pulled from under
your very feet as you try to walk across. But it is not that bad: without resonance, energy
wouldn’t have even made it over that distance to start with! That is why it is so important to
characterize the actual resonant response. How it changes!
A big part of that natural response, in effect the inherent “power capability” of an L-C
combination, is determined by the relationship between the resistance (i.e. mainly the applied
load) and the ratio √(C/L). This simple fact actually leads to one of the scaling laws which will
be discussed in more detail later.
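To make that concrete, here is a small sketch (arbitrary illustrative values, and using the common characteristic-impedance shorthand Z0 = √(L/C), which is our own notation here, not the book’s): two L-C pairs can share the exact same resonant frequency yet differ wildly in their C/L ratio, and hence in how “stiff” they are against a given load R.

```python
import math

def tank_figures(L, C, R_load):
    """Resonant frequency, characteristic impedance Z0 = sqrt(L/C), and the
    dimensionless figure R/Z0 (a loaded-Q style measure; illustrative definition)."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    Z0 = math.sqrt(L / C)
    return f0, Z0, R_load / Z0

# Two L-C pairs with the *same* resonant frequency but a 100x different C/L ratio
R = 10.0  # ohms, hypothetical load
for L, C in [(24e-6, 100e-9), (240e-6, 10e-9)]:
    f0, Z0, Q = tank_figures(L, C, R)
    print(f"L={L*1e6:.0f}uH C={C*1e9:.0f}nF: f0={f0/1e3:.1f}kHz Z0={Z0:.1f}ohm R/Z0={Q:.3f}")
```

Same f0, very different R/Z0: frequency alone tells you nothing about power capability.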
It is clear: any inductance present in a circuit, discrete or “leakage”, is nevertheless still just an
“L” in the circuit, and it will attempt to return all its energy back to the source. It can’t ever
dissipate! Not within itself for sure. Maybe in external resistances! If present. And further, in
combination with a nearby C, it can also form either a deliberate, or an inadvertent, tank circuit,
constantly exchanging energy between the two. Because neither can dissipate! This is
essentially why resonant circuits are inherently so efficient to start with.
Generally, if we focus on just reducing ESR, DCR, RDS etc., we will get very high efficiencies. Quite
automatically! Yes, instead of AC sources, when we introduce switches connected to DC voltage
sources to push energy in at a certain rate, maybe with the intention of controlling the output
voltage by choosing the switching frequency, either to be close to, or far away from the natural
“resonant frequency”, we do have to consider the possibility of switching losses. Because we will
be turning our switches ON and OFF quite fast, simply to minimize crossover dissipation inside
them! But how “fast” is good enough? Resonance is actually very forgiving in this regard too.
Because it can be shown, as an example, that if we are using the popular half-bridge or full-
bridge topologies, we can actually use the leakage inductance present, to coax “soft-switching”,
or “zero-voltage switching” (ZVS), thus reducing switching losses dramatically. Leaving us with
only conduction losses, such as those from the DCR of the windings, or coils, to contend with,
and minimize.
No need, typically, for “state of the art” wide-bandgap (WBG) switches such as Gallium Nitride
(GaN) or Silicon Carbide (SiC). Cheap, low-voltage Mosfets, albeit with sloppy switching
characteristics, but with low RDS to reduce conduction losses, are the way to go in most resonant
converter applications.
Keep in mind though, as another indication of the self-stabilizing, self-limiting advantages of
natural resonance, it can be shown that lowering the coupling, though it enables greater
power delivery, also comes at an increasing cost! The dominant circulating/excitation current
component (the unused energy returning to the source every cycle) can become rather
high, leading to higher conduction losses, thus necessitating closer and closer attention to heat
dissipation, and to thermal management. Eventually however, that dissipation term is the
specific one which ultimately impacts, and dictates, the efficiency of any resonant
converter, provided of course, ZVS is being ensured.
The correct region to head to, for enabling ZVS, is indicated in Figure 1.2 using a representative
resonant curve. It can be shown that ZVS is possible only if you switch to the right of a
(presumably single) resonant peak. On the left side of that peak, you get hard-switching, which
is essentially lossy and generates a lot of EMI too. Basically, to the right of any resonant peak,
the LC network appears “inductive” (on the left it appears “capacitive”). We do remember that
inductors “complain” whenever we try to suddenly interrupt current flowing in them—the very
reason we need a catch diode in “square”/PWM topologies, as was explained in Chapter 1 of
Switching Power Supplies, A-Z, Second Edition, published by Elsevier in 2012. As a result,
inductances tend to maintain/force current continuity, searching for whatever path may be
available to continue to push current through. So, in a typical half-bridge (totem pole), if we
switch OFF the lower switch (Mosfet), the current in the “inductive” LC network being switched,
will tend to freewheel—through the body diode of the upper switch, the only available path! We
then just need to leave a deadtime of typically 100-200 ns before turning the top Mosfet ON.
Because during that tiny deadtime duration, the voltage across the upper Mosfet will be forced
to near-zero value, by the “inductive” network: its body diode will be forced into conduction to
maintain current continuity. And so, when we finally turn the upper Mosfet ON (after the
deadtime elapses), we get lossless, or zero-voltage switching (i.e. with no overlapping V-I
crossover, since V was almost zero during the entire switch crossover).
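The inductive-above / capacitive-below behavior is easy to verify numerically. The sketch below uses a plain series R-L-C branch as a simplified stand-in for the switched network (values are arbitrary, for illustration only):

```python
import cmath
import math

def series_rlc_phase(f, R, L, C):
    """Phase of the input impedance of a series R-L-C branch at frequency f.
    Positive phase = inductive (the ZVS-friendly side), negative = capacitive."""
    w = 2 * math.pi * f
    Z = R + 1j * w * L + 1.0 / (1j * w * C)
    return cmath.phase(Z)

# Illustrative values only
R, L, C = 0.5, 24e-6, 100e-9
f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))  # the resonant peak, ~103 kHz here

print(series_rlc_phase(0.8 * f0, R, L, C))  # below resonance: negative (capacitive)
print(series_rlc_phase(1.2 * f0, R, L, C))  # above resonance: positive (inductive)
```

So if the peak shifts under load and your fixed operating frequency ends up below it, the sign flips, and ZVS is gone.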
Surprisingly, none of the WPT practitioners out there seem to have understood this aspect
either. Especially those who try to control the output very simplistically: by trying to vary duty
cycle, or phase or even the input DC voltage applied to the transmitter, keeping frequency fixed.
All these were inspired by classical PWM power conversion obviously, where fixed-frequency
techniques are preferred for reducing EMI in particular. They didn’t seem to realize that with
the typical natural shift of the resonant peak to a higher frequency when you load it, as we
will explain, they could easily land up, without doing a thing, on the wrong (left) side of the
peak—where ZVS is no longer possible. Now we will get high losses and much higher EMI too!
Just the opposite of what we were intending.
Energy is also not “automatically lost” between the receiver and transmitter coils, as some think.
There is no “resistor” present in that space, for dissipation to occur! Keep in mind that no part
of any energy balance-sheet is ever “unaccountable”, especially not when we are in the near-
field region, as explained further below. Energy is lost only in resistances (as heat), including in
the DCR of the coils themselves, but not in the airspace between!
Unless of course there is a metal object present therein (the “foreign object detection (FOD)”
issue), which generates eddy currents and thus heat. But that too is still, in effect, just a
resistance, an identifiable/quantifiable one, in the overall effective electrical schematic. It can
all be accounted for. There is no black magic, nor black holes involved here!
Figure 1.2: PWM versus Resonant Control Loops and Potential Issues
It is however always helpful to mentally distinguish between fields and waves! Waves indeed,
we may not be able to keep track of fully! They tend to disappear into outer space. Efficiency
truly becomes questionable. But for fields, those restricted to the “near field” region, it is just
all stored energy! Fully deliverable, with the rest being fully recoverable! None of it is ever
inherently doomed or “missing”.
And luckily, this near-field region extends up to about 480 meters for our typical 100kHz
operation (roughly 0.16 times the wavelength, which is 3 kilometers in our case)! So, we can
certainly disregard electromagnetic waves, which will appear only when, hypothetically, the
100kHz fields we generate are strong enough to reach into the far-field region, i.e. half a
kilometer away! Which they clearly don’t! Why? Because Nikola Tesla is not around anymore!
Yes, he had tried all that! And nowadays, instead, we have UL (Underwriters Laboratories)
and FCC (Federal Communications Commission) watching over us carefully! Quite a sobering
thought!
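The boundary numbers above follow from a two-line calculation (using the text’s 0.16×wavelength criterion, essentially λ/2π ≈ 0.159λ; at 2.4 GHz this yields roughly 20 mm, the same order as the ~15 mm quoted):

```python
c = 299_792_458.0  # speed of light, m/s

def near_field_radius(f_hz, k=0.16):
    """Near-field boundary as a fraction of the wavelength c/f.
    k=0.16 approximates the lambda/(2*pi) criterion used in the text."""
    return k * c / f_hz

print(near_field_radius(100e3))  # ~480 m at 100 kHz: waves are a non-issue
print(near_field_radius(2.4e9))  # ~0.02 m at 2.4 GHz: almost everything radiates away
```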
Things will change dramatically only when we take our frequencies up to RF-levels. Indeed, RF-
based WPT methods do have intrinsically low efficiencies, of around 5-10 %, because at their
typical 2.4GHz, the near-field region ends at around 15mm! Less than a third of a finger away! As a
result, a huge fraction of the energy those RF-technologies create does leave forever as
electromagnetic waves headed for distant planets. But not so in any of the near-field, low-
frequency category of resonant topologies under discussion here. The underlying physics of
resonance is shared by all the LLC/WPT methods that we are discussing here.
These clarifications, combined with the basic properties of inductors and capacitors, imply that
on paper at least, any resonant system can approach the ideal of 100% efficiency. Provided it is
correctly designed, as was hinted at in our ZVS discussion too. Usually, we just stop pushing
efficiency at the point where it stops making economic sense. A product needs to be cost-effective after all!
The ostensibly “fundamental” question of some: “What exactly is your efficiency”, has a simple
answer: “How much do you want?” Actual implementations of near-field WPT should certainly
be questioned. To know for example, whether they are exploiting ZVS or not. That will affect
efficiency. And distinguish technologies. But nothing arcane though.
As indicated, resonance is powerful, but inherently self-limiting too! Which is why it is very
tricky to design in. The peak bulges, but then collapses as mentioned! Nothing obvious we can
carry over from classical power for sure. That was all relatively “single-direction” and
predictable.
We realize that in a simple buck regulator we didn’t even need a control loop per se. For
example, if we had 15V input, and we wanted 5V out, we just had to drive the buck converter at
one-third duty cycle, irrespective of load current, and we would always get 5V (though indeed,
the load must be reasonably high to enforce continuous conduction mode “CCM”). At a later
design stage, when we finally introduced the (AC) control loop, its purpose was only to impart
much higher precision (regulation) to the desired/set output voltage.
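That load-independence is almost embarrassingly simple to state in code, a trivial sketch of the first-order CCM relationship (the 15V/5V numbers are from the example above):

```python
def buck_duty(v_out, v_in):
    """First-order CCM buck transfer function: VO = D * VIN, so D = VO/VIN.
    Valid to first order only -- parasitics (DCR etc.) are ignored."""
    return v_out / v_in

D = buck_duty(5.0, 15.0)
print(D)  # one-third duty cycle, (ideally) independent of the load current
```

Nothing like this one-liner exists for a resonant converter, which is exactly the point of this section.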
The “direction of correction” (“DoC”) was unconditionally obvious too, as in all of classical
power: for all three fundamental topologies, you just need to increase D, and in response, VO (or
the DC “gain”, which is just VO/VIN), would definitely increase. No mystery there. See Figure 1.2.
But coming to resonant circuits where traditionally, “frequency modulation” is used to control
the gain and thus the output, we should actually not be making any of the three implicit
assumptions made via the familiar “control algorithm” reproduced in Figure 1.2, from Fulton
Innovation (an Amway subsidiary, the guys who actually created the first versions of Qi).
a) The height of the resonant curve is not fixed as it seems to be, from their diagram: it is
in reality, a strong function of load. It will inevitably start flattening out if you try to
increase the gain to deliver more power to the load (by lowering the frequency). That
flattening, indicative of the self-stabilizing properties of resonance, was clearly not
considered, or anticipated, in the Fulton/Qi curve. But the mistake is not fatal! It could
still be considered implicit therein and in fact, even if that happened, it would likely be
handled quite well by any simple control loop, with a fixed DoC, as indicated by the load
transition cases shown in Figure 1.3 (a to c). But in fact, that was not even close to the
actual shape of the resonant curve they had on hand, as we will reveal!
b) The resonant curve is also not fixed relative to frequency, as the curve from Fulton
implies. It can move sideways. To the right for higher loads. And not just slightly, but
dramatically, as we will learn. Precisely 41% for very high loading and a coupling of 0.5,
as an example! It is a complicated function of both load and coupling in general.
Note: This sideways shift of the peak is not even obvious in simulations if we use the
built-in default “transformer” available in most commercial simulator packages.
Now, with the lateral shift of the resonant peak, strange new things will happen! Qi for
example assumes the peak is fixed and thus lays out a lower limit of 115kHz (to avoid
rolling over it!). But what if the peak was actually at 130kHz? Now, if we headed towards
115kHz, we could also easily “roll over” the top of the (flattened), shifted peak, especially
if our C/L ratio was inadequate, in relationship to the R (load resistor), i.e. insufficient
power available from the resonant circuit to meet our basic power requirement.
There is in fact something almost intangible, called the “power capability of a resonant
circuit”, which depends on C/L as previously mentioned. Unknowingly, Qi “approved”
many transmitter coils with widely-varying, almost random, C/L ratios, all supposedly
for the same 5W/15W power level (and often for the same VIN too). The overall power
capability of the selected reactive components was all over the place! They were
probably thinking “Buck” when they wrote their standard, which as we indicated has a
transfer function independent of power (in the first order). But resonance is very
different.
Worse, due to the flattening and roll-over, you would then land up in the region to the left of
the peak, where your entire DoC now needs to be reversed: here you need to increase
frequency to deliver more power, not decrease it! But if you didn’t even know that you
had rolled over (and how could you?), the power would collapse increasingly, as you
lowered the frequency further and further, eventually leading to a “mysterious” turn-
off—due to power inadequacy, control loop confusion, or just too low a voltage on the
receiver for it to even be able to communicate back its requirements. See Figure 1.3d
for a similar example of what can happen!
Figure 1.3: Frequency modulation will get stuck if the peak shifts
c) Finally, why assume we have just one resonant peak? There were two capacitors
remember? One from the Primary (transmitter), and another one expectedly reflected
from the Secondary (receiver). Both would end up interacting with the existing L. So,
two second order poles. Two resonant peaks—basic physics! But looking back, all those
engaged in WPT today, seem to have based their efforts on a fundamentally flawed
“double resonator” assumption, which unbeknownst to them, gave not one fixed peak,
as they had assumed, not one moving peak (which they could have conceivably handled,
say by exotic or proprietary control algorithms, had they known), but two wildly moving
peaks, not even amenable to a simple DoC, or even a very clever control strategy. And
that is precisely how the world possibly ended up with “inherently poor user experience
of inductive systems”. This is the critical flaw encircled in Figure 1.4b too. The entire
WPT world subscribes to it! They just (still) don’t know that they don’t!
Figure 1.4: LLC to WPT, by avoiding a currently prevalent mistake in the latter
The double-resonator flaw (i.e. the presence of CS in effect), is shown encircled in the equivalent
circuit diagrams presented in Figure 1.5 too. But that figure has some new details, which will
be helpful in our upcoming design calculations/procedure.
Figure 1.5 shows us how to go back and forth between equivalent AC and DC models, including
reflecting impedances over to either side of the transformer, for enabling simple calculations.
That is all based on keeping in mind that even if we apply a “square-ish” voltage shape to a
general LC network, we need to mentally break it up into its Fourier (sine) components….and
see the effect of each sine component on the network. Later we can reassemble the responses
back. Turns out, it is usually sufficient to assume that of all the harmonic constituents of the
“square-ish” waveform, only its fundamental (first) harmonic has the main, or dominating,
effect. Therefore, resonant topologies are commonly studied using the “First Harmonic
Approximation” (FHA). We ignore other harmonics.
Using FHA, it can be shown that the peak-to-peak value of the first harmonic (sine wave
component) of the input voltage, exceeds the peak-to-peak of the square wave responsible for
it, by the simple factor of 4/π = 1.273. But in a half-bridge, as opposed to a full-bridge, we also
have to understand that its resonant capacitor acquires an average DC level of half the supply
voltage, subtracting from the effective input applied to the L-C network. So, in effect we need to
halve the stated peak-to-peak value, down to 2/π times VIN, as indicated in the figure.
Using FHA, we must further realize that if we “reflect” a given load resistor present on the
output of a DC-DC converter, to an “equivalent” Primary-side resistor, we must maintain the
actual dissipation (energy) term through the process of “reflection”. The energy can’t
change by reflection! So, combined with the FHA, it can be shown that a certain “R” present on
the Secondary side (on its rectified DC output) can be reflected back to the Primary, to appear
as an effective (AC) resistor of value (8N²/π²)×R, as shown in Figure 1.5.
Note: Keep in mind that the turns ratio used by us is “N= NP/NS” whereas most simulator
packages prefer to call turns ratio as NS/NP. We may have to take the reciprocal!
So, the “equivalent AC load resistor” is actually of smaller value (in ohms) than the
corresponding DC load resistor it came from, by the factor 8/π² ≈ 0.8106, all from
FHA analysis. That equivalent AC load resistor, we can then reflect onto the Primary, as a
typically larger resistor (since N > 1 typically), by the factor N² (to keep dissipation the same too,
despite reflection). Combined, we get the overall factor of 8N²/π², as indicated in Figure 1.5.
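All the FHA factors above can be collected in a few lines (the 10 Ω load and N = 4 below are hypothetical, chosen purely for illustration):

```python
import math

def fha_reflected_resistance(R_dc, N):
    """DC-side load resistor reflected to the Primary under FHA: (8*N^2/pi^2)*R_dc.
    N is the turns ratio NP/NS, as used in the text."""
    return (8.0 * N**2 / math.pi**2) * R_dc

# The FHA scale factors quoted in the text
print(4 / math.pi)     # 1.273...: full-bridge first-harmonic factor
print(2 / math.pi)     # 0.636...: half-bridge (resonant cap sits at VIN/2)
print(8 / math.pi**2)  # 0.810...: DC load -> equivalent AC load

# Hypothetical example: 10-ohm DC load, turns ratio N = 4
print(fha_reflected_resistance(10.0, 4))  # ~130 ohm seen on the Primary
```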
Similarly, maintaining the stored energy too, in the process of reflection, and realizing that
voltage scales (typically increases), by a factor ×N in going from the Secondary to the Primary,
and current scales (typically decreases) by the factor ×(1/N) in doing the same, a capacitor must
reflect in going from the Secondary to the Primary, as CS/N² (its value typically decreases since
its voltage has increased in the reflection process, but energy is fixed). Similarly, an inductor
reflects (typically increasing, since the current through it has decreased) as LS×N². All this is
based on maintaining the same (1/2)×L×I² and (1/2)×C×V² energy terms, in going from the
Secondary to the Primary!
Note: That is why in classical power conversion, we strongly recommend lowering the “trace
inductance” on the Secondary-side of a Flyback converter. Even the typical 20nH/inch reflects
into the Primary by turns-ratio squared, becoming a rather huge stray inductance whose energy
has to be usually just burnt (dumped) in the Zener clamp at every turn-OFF transition.
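A minimal sketch of these reflection rules (the turns ratio N = 10 and the component values below are hypothetical; the 20 nH is the per-inch trace inductance figure from the Note above):

```python
def reflect_to_primary(L_s, C_s, N):
    """Reflect Secondary-side L and C to the Primary, keeping the (1/2)L*I^2 and
    (1/2)C*V^2 energy terms unchanged: L -> L*N^2, C -> C/N^2 (N = NP/NS)."""
    return L_s * N**2, C_s / N**2

# Hypothetical turns ratio N = 10 (e.g. an offline Flyback)
N = 10.0
L_p, C_p = reflect_to_primary(L_s=20e-9, C_s=100e-9, N=N)
print(L_p)  # 20 nH of secondary trace inductance becomes 2 uH on the Primary!
print(C_p)  # 100 nF on the Secondary reflects as just 1 nF
```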
We will not derive any of these formulae here, as most references on the subject usually carry
all that. We will focus on gathering and using all the tools necessary for a very thorough and
complete design.
Note that in Figure 1.5, the “tuning capacitor” CS, which, as mentioned, everyone seems to have
rather incomprehensibly introduced in modern WPT, reflects to the Primary and creates
another resonant peak! It is a fundamental flaw, as it leads to the double-peak gain profile
possibility hinted at in Figure 1.2, which as mentioned, is not even amenable to any known
DoC, or any apparent/exotic control loop strategy—to ensure ZVS for example, or even a
dependable output. And worse, all the “fixed frequency” approaches, i.e. changing phase, or
input voltage, or duty cycle, are even less promising than frequency modulation methods in this
regard.
As suggested, our first stop should have been the better-understood, more-familiar, LLC
converter. From there, on to WPT. Because we realize that both LLC converters and WPT
systems need to be designed in much the same way. They are the same topology! Or at least
should have been, had modern WPT implementations been done right. LLC and WPT constitute
two sides of the same resonant coin.
As indicated, what is so “wrong” in WPT today, and in fact totally unnecessary, and not there in
LLC (for good reason), is the presence of the Secondary-side capacitor CS shown in Figure 1.4b.
This figure is exactly where all attempts in WPT are stuck today, without even knowing it.
As also indicated, it all came about perhaps based on a wishful attempt to tune CP and LP in the
Tx, with LS and CS on the receiver, radio-frequency style, expecting maximum power. Both LP and
LS were in effect measured independently, forgetting the effects of their mutual interaction
when brought closer. But surprisingly, everybody is doing exactly that in WPT today, though
the tuning frequencies may differ.
They all apparently assume what is now presented more clearly in Figure 1.6a: that if they
tune the Tx to say 100kHz (as Qi does), and the Rx to the same 100kHz, independently, then the
net resonant response when the coils are brought closer, is still a single peak at 100kHz. But
what really happens, is also shown in the same figure, and it can be easily proven using the
basic, but accurate-enough, “transformer model” shown in Figure 1.5. The gain profile not only
shifts sideways, but splits into two virtually unpredictable peaks (Figure 1.6b) in their case.
Unpredictable in “height”, just for starters (so how can you ever predict the “power capability”
of any L-C network anymore?). Also, not only shifted sideways, but two shifting peaks (no
bullet-proof control loop seems even possible anymore). Plus, the right of one peak is also the
left of the other (so where exactly do we need to “aim” for, to get ZVS?). There are problems
galore for Figure 1.6b.
Figure 1.6: What the world assumed was happening in WPT, and what really did
In contrast, had we gone down the LLC path, we would arrive at Figure 1.6c. That recognizes
and predicts both the extent of the flattening, and the lateral shifting. It has only one peak too!
It is thus fully amenable to a properly-designed, smart control loop. Though of course, in WPT,
to handle the variable K, we will need to progress on to proprietary control loops! That does
get very, very complicated. Not to be discussed here either.
Keep it simple, stupid (K.I.S.S.), as they say, but don’t be stupid enough to oversimplify either!
All those who did exactly that, and tried to keep a fixed switching frequency, resorting to duty-
cycle, phase or VIN modulation to control the output, didn’t realize that they will all land up on
the wrong side of the resonant peak (or peaks in their case) sooner or later, i.e. with or without
any CS present! So: poor “user experience” eventually? Probably worse than basic Qi too.
1.11 The Elusive Resonant Peak
Now let’s explain the lateral shifting of the (single) resonant peak using the simple “transformer
model” shown in Figure 1.5. The Primary inductance LP splits into a Secondary-side coupled
portion “K×LP” and a leakage or uncoupled portion (1-K)×LP. We are ignoring the encircled,
flawed CS now. See the equivalent circuit being carried over to Figure 1.7, to explain one of the
key reasons why the LLC became so popular to start with, as compared to other “competing”
resonant topologies such as the series-resonant converter (“SRC”).
We now show in Figure 1.7 how varying loads produce a predictable shifting of the resonant
peak as indicated, between two extremes. One extreme occurs for no load, called “fLO”, the other
for very high loading, called “fHI”. One peak location is based on the entire Primary inductance
coming into play, the other on leakage inductance only. The gain curves for intermediate loads
fall somewhere between the two operating frequency limits, fHI and fLO. Note that it is always a
single peak in our case, but shifting between two extremes. It is not a double peak! No
constituent curve in the set has two “humps”. Look closely.
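The two extremes follow directly from the transformer model of Figure 1.5 (values below are illustrative; K = 0.5 reproduces the 41% shift quoted earlier):

```python
import math

def llc_frequency_extremes(L_p, C_r, K):
    """No-load and heavy-load resonant peaks for the simple transformer model:
    fLO uses the full Primary inductance L_p, fHI only the leakage (1-K)*L_p."""
    f_lo = 1.0 / (2 * math.pi * math.sqrt(L_p * C_r))
    f_hi = 1.0 / (2 * math.pi * math.sqrt((1 - K) * L_p * C_r))
    return f_lo, f_hi

# Illustrative values; coupling K = 0.5 as in the 41%-shift example in the text
f_lo, f_hi = llc_frequency_extremes(24e-6, 100e-9, 0.5)
print(f_hi / f_lo)  # 1/sqrt(1-K) = sqrt(2) ~ 1.414: a 41% upward shift
```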
This is what led to the big perceived advantage of the LLC. It simplified the control loop and the
design of switches and all associated circuitry too (no need to go to very, very high frequencies
just to reduce gain, say at very light loads, as was the situation with the “competing” SRC
topology). This also leads to a rather narrow, manageable band of EMI. We could basically keep
fHI below 150kHz, just to pass EMI testing more easily, and restrict the operation between fHI
and fLO, for all loads.
Notice how all the gain curves intersect exactly at the point described by: Gain = 1 and f = fHI. Let
us call this the "magic operating point" ("MOP"). All the curves intersect here because at this
frequency, the leakage inductance and the resonant capacitor are in complete resonance, and
form a "dead short" in effect, as explained previously! With that "dead short", the entire input
voltage will appear unfettered, straight at the output, irrespective of load! So, VIN = VO, i.e. Gain
= 1, for any load!
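The "dead short" claim is easy to verify numerically. Below is a minimal sketch; the component values are purely illustrative (any L, C and K show the same behavior), and the variable names are our own:

```python
import math

# Illustrative values only; any L, C, K behave the same way
L_P = 57.2e-6   # Primary inductance (H)
C_P = 225.8e-9  # resonant capacitor (F)
K = 0.9         # coupling coefficient

L_leak = (1 - K) * L_P  # uncoupled (leakage) portion of L_P
f_HI = 1 / (2 * math.pi * math.sqrt(L_leak * C_P))

# Impedance of the series leakage-L / C branch, evaluated at f = f_HI
w = 2 * math.pi * f_HI
Z = complex(0, w * L_leak) + 1 / complex(0, w * C_P)

print(f"f_HI = {f_HI / 1e3:.1f} kHz")  # frequency of the "magic operating point"
print(f"|Z| = {abs(Z):.2e} ohms")      # essentially zero: a "dead short"
```

At fHI the inductive and capacitive reactances cancel exactly, so the branch impedance collapses to (ideally) zero, independent of whatever load sits behind it.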
Note the eerie similarities to the “load-independent DC transfer function” of a Buck as discussed
earlier! So theoretically, now, if we operate precisely at fHI, we would not need to do anything at
all to adjust the output, for any load from zero to maximum! Provided we had designed our circuit
for a gain target of exactly 1. What that means is the set reflected output voltage (often called
VOR in a Flyback for example), would need to be exactly equal to the applied DC input voltage.
Then, clearly: Gain =1.
Theoretically, we could then even choose to operate precisely at fHI always. No control loop in
place. It does seem plausible and promising too: we would be operating to the right of any
applicable resonant curve always (for any load), and we would thus always get ZVS too,
automatically. Since it is fixed frequency, it is good for EMI testing too! All we really have to do
is to specify the right turns ratio so that N × VO = VIN. As simple as that! It should work! A perfect
topology it seems! What if anything, is wrong?
Like all things "power", nothing is ever straightforward! First, this series L-C "dead-short"
visualization is true only to first order. When we introduce parasitics, the output starts to
droop at high load currents due to simple resistive voltage-divider action between the DCR of
the Tx coil (RDS of FETs too), and the load resistor.
Figure 1.7: Varying load in an LLC causes a frequency shift, depending on coupling
So, we may opt to forcibly ensure that we have very, very low parasitics, just to make the output
droop “acceptable” (whatever that means). But that will require very, very thick expensive
copper! Instead, how about using resonance (more) effectively, to literally boost (amplify) the
output voltage if it starts to droop too much? In fact, then maybe we can amplify it even further
to compensate for a droop in the input too! That will be much smarter and more cost-effective,
compared to using utterly thick copper windings! That is actually the right way to do it, the
basic method underlying our graphical aids too. It does demand a lot of design expertise though,
at least to generate those curves. But to use them is very easy, as we will learn.
In Figure 1.7 we show the basic problem with the fixed-frequency (no control loop) method. If
the set fixed frequency does not coincide exactly with fHI, and/or the desired/set gain is not
exactly unity, we will land up slightly to one side of the MOP, where we can see the gain curves
can drop off dramatically (under our feet, quite literally) as we change load, especially if we are
to the left of the MOP. If we land up there, we would likely experience a dramatic droop in the
output, for load variations. Typical tolerances in L and C, not to mention variations in coupling
(which also affects the leakage inductance, and thereby fHI), will play havoc with the desired
output. The only way out is to manually tune every single transmitter in production, to the exact
resonant frequency of the specific L and C we happen to have on a given board (yes coupling
variations included).
We conclude that the constant, fixed-frequency approach, may make a “good (tuned) demo
board” to impress the uninformed with, but it doesn’t promise to ever qualify as a commercial
product! Not without a control loop. That approach may have “sort-of” worked with a classical
power topology, as we too indicated was possible for a Buck, on the basis of its simple DC
transfer function, but resonance is not to be underestimated. We do need a control loop here,
and for many more reasons than we ever needed it in a Buck!
If we have no tolerance even for L and C variations, how can we ever handle the second most vexing
problem of all: input voltage variations. The most vexing of course being the variations in K,
especially in WPT systems, as we had mentioned.
The graphical technique proposed here will handle input droops too. We realize that since Gain
= VO/VIN, if VIN falls, then for maintaining a fixed VO, we just need to increase the Gain by the
same factor! That should be easy to accomplish, almost automatically by a control loop, say one
based on frequency modulation, as is commonly used in LLC converters. Provided we have the
right "bulge" in the selected resonant curve, we can simply head towards fLO, stopping wherever
we reach the desired output. Indeed, handling K-variability is another huge challenge
altogether.
In this chapter we are recommending that we must always, for the LLC and even for WPT, try
to ensure that our system restricts its operating region to the region between fHI and fLO. This
we can ensure by designing it to have a "gain target" (i.e. VOR/VIN, where VOR = N × VO), set slightly
greater than 1 (~1.05) at maximum load and maximum input voltage. The rest of the procedure
will become clear shortly. See Figure 1.7. We can see that the 1.05 gain target ensures we are
slightly to the left of the MOP! Not to the right. Ever, as we can show! We will automatically move
to the left of fHI if the input voltage falls. But if the output is, say, suddenly unloaded, the output
will jump momentarily higher (above the set 1.05).
The control loop will then try to reduce it by increasing the frequency, but won't need to go
above fHI, since even the no-load curve gain drops to below 1.05 near the MOP, which we know
occurs at fHI. It will thus all work just fine between two frequency extremes. But the key is to set
the gain target (using the correct turns ratio in effect) to slightly above 1. Most LLC designers do
so, but rather inadvertently.
As a side note, all WPT systems around us today actually ignore even the above simple guidance
concerning the set gain target/desired VOR. We already know that all the “approved” Qi
transmitters turned a blind eye to the critical C/L ratio, so the “power capability of an LC
network” was not understood. But now we must recognize that all their “approved” receiver
coils, have different turns ratios with respect to even a single transmitter, and also, the receivers
are virtually allowed to set output voltages of their choice (no real guidelines to set VO, and to
thereby establish the desirable VOR, which we just learned should be set slightly higher than
VINMAX).
In general, the gain target of any existing WPT technology out there could just end up being less
than 1. Unknowingly! As a result, WPT systems doing “frequency modulation” will end up trying
to control the output by operating to the right of “fHI” as indicated in Figure 1.7. Not strictly to
its left as strongly recommended. One obvious drawback of that, besides higher (broader) EMI,
is that at very light loads, the transmitter can no longer reduce the gain any further by raising
the frequency, so the receiver has to suddenly apply an internal overvoltage clamp (usually
dissipative), just to protect itself. Keep in mind that Qi restricts its operation between 115 and
205 kHz, to simplify the design of the power stage. But if the control loop tries to increase the
frequency above the upper limit of 205kHz, internal clamps are activated within the receiver to
try and protect itself. Not elegant! A heavy price to pay for ignoring both power capability, and
VOR.
Besides, a key question for any clamp is: for how long is it safe? Or even practical? That in fact,
is one of the reasons Qi can never do laptops! Imagine applying typically 50-100 Watt Zener
clamps inside the laptop, for over a second (for that is how long the correction loop of Qi can be
in practice, with a few missed packets, as has been commonly observed).
Now add to that mess the K-variability factor, and the double peaking issue arising from CS,
neither of which we have even considered while doing the gain plots in Figure 1.7, and all bets
are off! Proprietary solutions are probably the only solution to these very complex issues and
the unpredictable interplay of their influencers.
As we had declared initially: the awesome power and potential that resonance brings to our
doorstep quite literally, comes with a hefty price tag: it demands total design expertise. It could
have been corrected, but that window of opportunity has firmly closed. Only proprietary
solutions exist to solve this issue (too).
Before we reveal our design procedure, we try to get a final “feel” for the magnetics of resonant
topologies, the LLC in particular, and why that is another big attraction to use the topology,
compared to the magnetics in classical power conversion. A lot of what we will learn applies in
principle to WPT systems too.
Figure 1.8 provides a good visual impression of the key advantage: the lowered size (and cost)
of LLC magnetics, making it akin to a Forward converter transformer in size, and without the
need for any output choke. Focusing on the transformer only for now, it is small, because no
fraction of the useful power delivered to the load in an LLC topology, or in a Forward converter
transformer, needs to be ever stored, even temporarily, within its magnetic core during the
power transfer/conversion process. The reason is that the Secondary windings conduct at the
very same time as the Primary windings, so the flux associated with the load current flowing in
the Secondary, is fully canceled by the additional flow of current flowing into the Primary
winding to support it. The input current component associated with the actual delivery of useful
power to the Secondary, is obviously load-dependent. The magnetization, excitation
component is not. The latter leads to the steady conduction loss term we discussed earlier, and
the residual flux at all times within the transformer core.
Since the fluxes attributable to the load-dependent currents flowing in the Primary and
Secondary windings cancel each other out, the net flux we are left with in the core is still just
the one related to the initial magnetization current component, which was and remains load-
independent. Think of the flux as the one present at zero load current. It doesn’t budge with
load! Nature opposes any change. And it has done so very successfully here! Fully!
If a Forward converter transformer, or an LLC/WPT magnetic core, is made visibly larger for
higher powers, it is really not to support higher load currents per se. It is made “bigger” only to
accommodate the thicker copper windings constituting the Primary and Secondary (more
window area, or larger surface area for better cooling). And for the same reason, the “core loss”
term in a Forward converter’s transformer or in an LLC or WPT system, is also just based on
the zero-load-current case! Not on the maximum load current. How convenient! If it gets hot, it is
not core loss from within the core we need to consider and reduce, just conduction loss from
the adjacent copper.
To re-iterate: the transformer needs to store only the energy related to the load-independent,
“magnetization current” component, which can be intuitively thought of as an excitation
component necessary to initiate basic “transformer action”, thanks to Faraday’s law of magnetic
induction. Nothing related to the actual power transfer process is stored in the core! So
typically, in an LLC or Forward converter, the transformer volume requirement is roughly half
to one-third the size of a Flyback transformer of the same power capability.
In a Flyback transformer, the Secondary windings do not conduct at the same time as the
Primary windings. So, we basically need to store all the energy meant for delivery to the output,
inside the transformer core, during the ON-time, then pull it all out during the OFF-time. The
Flyback transformer core is thus a bit like an Amazon warehouse, for energy! It needs to be big.
For the same reason too: everything that goes out, must be stored there at some point of its
journey.
But the Forward converter, unlike the LLC, needs an output choke, to finally store a certain
fraction of the load-dependent energy too. That is because energy storage in a magnetic core is
fundamental to the power transfer process of all PWM-based classical power topologies, though
by different amounts. That aspect was discussed in detail in Chapter 5 of Switching Power
Supplies A-Z, Second Edition, in particular on Page 208. Reproduced here partially:
ε_BUCK = (PIN/f) × (1−D);    ε_BOOST = (PIN/f) × D;    ε_BUCK-BOOST = PIN/f
This tells us how much energy per cycle needs to be "cycled/stored" for the three fundamental
(classical) topologies. Since PIN/f is the energy per cycle, we learn that the Buck-Boost (or the
Flyback) needs to temporarily store (and then release) 100% of the energy making its way to
the output. In comparison, Buck inductors tend to be small, especially when the amount of
"bucking" (stepping-down) asked of them is less. This is also why Flyback (Buck-Boost)
magnetics are so much bigger.
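These per-cycle storage fractions can be captured in a few lines. A small numerical sketch (the function name and example operating point are ours):

```python
def core_energy_per_cycle(p_in: float, f_sw: float, duty: float, topology: str) -> float:
    """Energy (in Joules) the magnetic element must store each switching
    cycle, per the classical-topology expressions quoted above."""
    e_cycle = p_in / f_sw  # total energy processed per switching cycle
    fraction = {"buck": 1 - duty, "boost": duty, "buck-boost": 1.0}[topology]
    return e_cycle * fraction

# Example: 60 W input, 200 kHz switching, D = 0.7
for topo in ("buck", "boost", "buck-boost"):
    uj = core_energy_per_cycle(60, 200e3, 0.7, topo) * 1e6
    print(f"{topo}: {uj:.0f} uJ per cycle")
```

Note how the Buck-Boost (Flyback) stores the full 300 µJ per cycle at this operating point, while the Buck, asked to "buck" only mildly here (high D), stores much less.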
Basically, in the case of the Flyback topology, its transformer is its energy-storage element too,
besides providing "transformer action" (voltage scaling and isolation in particular). It needs to
temporarily store 100% of the energy transferred (not just a fraction dependent on the duty
cycle, D), since a Flyback is just a Buck-Boost derivative. Similarly, a Forward converter, despite
a simple transformer, does need an output choke, since it is a Buck-derived topology, and
therefore needs to store a certain fraction (proportional to 1−D) of the power transferred.
All initial points are in perspective by now, hopefully providing us a much better “feel” for
resonance. So, we will now get deep into the simplified but accurate design procedure suitable
for any LLC and (correctly designed, i.e. no CS) WPT system.
As indicated, we just need to take a kernel, study it very well, and then use scaling laws to take
it to any power, frequency and input voltage level. But we also want to introduce a factor for
establishing a certain gain margin to serve us well when the input voltage starts to droop, i.e.
to handle input variations! A lot of that was already demonstrated in Switching Power Supply
Design and Optimization, Second Edition, but the following key items were not considered there,
which we want to do here:
a) How to scale as per input voltages. In other words, if the kernel was studied for a DC
input range of 32V to 52V, and now we want our LLC converter to work from 200V to
400V DC, how does that affect our overall scaling of components?
b) How to handle different coupling coefficients? In the previous exercise, we had a fixed K
of 0.9 and when we scaled, we kept the same K. In reality, we want to consider any K,
even very low couplings, as in WPT.
In our previous exercise we had initially started off with a certain very low frequency kernel,
and then scaled that to become a 25.5W LLC converter for a Power over Ethernet (PoE)
application (32V to 52V DC input). So just for historical continuity, we have for the graphical
aids presented herein, used the same values of LP and CP (57.2µH and 225.8nF) that we had
arrived at, as a result of scaling our very first very low-frequency kernel. Now, we will just use
these suggested PoE converter values as the kernel for the next scaling exercise here. Since we
are scaling anyway, we could use any initial kernel. So, we then introduced these LP and CP
values into a general Mathcad spreadsheet to generate the graphical aids presented. The
spreadsheet is in the Appendix to Chapter 1.
Just to throw more light on the gain curves we will get in an LLC topology, as an intermediate
step, we used the above Mathcad spreadsheet to suggest the “recommended” load resistor of
20.6 ohms. And then we used multiples, or fractions, of that to generate the stacked gain plots
in Figure 1.9. Note that 20.6 ohms is the equivalent AC load resistor to be applied to the
equivalent AC-AC, non-isolated model shown in Figure 1.5c. It was recommended by the
spreadsheet, to be able to deliver the desired average power of 25.5 Watts in the final converter,
over the desired input range of 32 to 52 VDC.
We have also used Mathcad to predict exactly where the entire LC network appears “inductive”
(solid lines) and where it appears “capacitive” (dashed/dotted lines). It tells us where ZVS will
occur and where it won’t! A bit to our initial surprise we do see that the changeover from solid
to dotted does not occur precisely at the “peak” value, as always somehow assumed. The reason
is that the peak is determined where the leakage (1−K)×LP and resonant capacitor CP are in
perfect resonance ("dead short")… but on closer examination, that does not necessarily coincide
exactly with the point where the entire network (including the coupled portion K×LP)
"resonates". Basically, there is a slight shift on account of the magnetizing inductance and the
load entering into the picture.

Figure 1.9: Mathcad plot of gain for an LLC with and without CS

The practical implication of that is it may not be enough to be
merely “slightly to the right” of the peak voltage for ensuring ZVS. Maybe a little more than
“slightly”! But in any case, the transition from lossless to lossy switching is a bit gradual anyway,
so though the losses may go up slightly as we leave the solid region of the curves, and enter the
dotted portion (on our way to the peak), they might still be acceptable enough, to not warrant
more effort on our part to tweak the control algorithm.
In the lower part of Figure 1.9, we also show the complete mess the mere introduction of a
reflected capacitance of 100nF (arbitrarily chosen), from the Secondary to the Primary side,
causes. On display here is all that is wrong with modern WPT: the "double resonator" flaw we
had talked a lot about! Notice there are two peaks, behaving differently. The one which shifted
to the right, has an even more dramatic shift than anticipated without CS.
So in the case of Qi, a good question to also ask is: what if the resonant peak has not just moved
to “141kHz”, as we often say as a ballpark estimate of the shifting, but actually to above 205kHz,
(the upper operating limit of Qi)? How can we even ever hope to regulate? We are stuck on the
wrong (left) side of the resonant curve. If we lower the frequency, the gain collapses further!
And of course we are not in a region where ZVS will occur. Unless the “double resonator flaw”
helps us inadvertently in this case. Because we have another peak with CS present. This has
actually shifted to the left of the no-load peak location fLO. Very hard to ever predict how any
control loop will react to all this mess. Most likely, a bunch of “mysterious turn-offs”. And then:
“inherently poor user experience of inductive systems”.
Let us move on, without CS here-ever-after!
With the above kernel, we have generated two key design curves, shown in Figure 1.10, and
Figure 1.11. Keep in mind that the values used were LP = 57.2µH and CP = 225.8nF. So, here is
the challenge:
We want to deliver 900W into a 48V output. Select the best L and C values for the converter for
an estimated coupling of K = 0.5. For EMI compliance reasons, we want to stay below 150kHz
guaranteed. What are the best operating frequencies to carry out meaningful simulations, to
test its performance? We are assuming an input DC varying from 200 to 400V.
On account of the high power, the obvious choice is a full bridge, not a half bridge. At 400VDC
therefore, the equivalent AC wave applied to the input of the resonant network is as per FHA:
VINMAX_AC = (4/π) × VINMAX_DC Volts
VINMAX_AC = (4/π) × 400 = 509.3 Volts
This is the equivalent amplitude of the sine-wave applied as per FHA. (For a half bridge we
would have divided this by 2).
From Figure 1.10, for a coupling of 0.5 and a VINMAX/VINMIN ratio ("gain factor") of 2 (to allow
us to reach 200VDC from the original 400VDC), we see that the recommended resistor to be
placed on the output of our non-isolated AC-AC equivalent circuit of Figure 1.5c, is 14.7 ohms.
This load resistor is actually valid irrespective of the applied/assumed input, because it
produces just the right amount of "bulge" in the gain profile curve, in conjunction with the
selected C and L (the C/L ratio in effect), to allow for a Gain of 2, occurring somewhere between
fHI and fLO, before the gain curve rolls off! So there is no hint of overdesign either! Just the
right “bulge” in the resonance curve.
Now, had we applied an input of 400VDC through a full bridge, or an equivalent AC voltage of
509.3 Volts, the power at the peak of the AC sine wave input would have been
Peak_AC_Power = V²/R = 509.3²/14.7 = 17650 Watts
Figure 1.10: Resistance Lookup Aid based on Kernel
Note that the average of any sine-squared function is known to be half. In other words, the
average power we got from the kernel, with the desired maximum input applied to it, is

Average_Kernel_Power = Peak_AC_Power/2 = 17650/2 = 8825 Watts

But we want only 900W from our proposed LLC converter, so the Power Scaling Factor we need
to apply is less than 1 in our case:

Power_Scaling_Factor = 900/8825 = 0.102
Coming to the frequency scaling factor, the fHI of our kernel (for the desired K=0.5) is

fHI_Kernel = 1/(2π√((1−K) × LP × CP)) = 1/(2π√((1−0.5) × 57.2µ × 225.8n)) = 62.63kHz
To stay below 150kHz, we would like to set fHI of the new converter exactly at 145kHz. So, the
desired frequency scaling factor is

Frequency_Scaling_Factor = 145kHz/62.63kHz = 2.315

Applying the scaling laws (to reduce power by a certain factor, we multiply L, and divide C, by
that factor; to raise the frequency, we divide both L and C by the frequency scaling factor):

LP = (57.2µH × 8825/900)/2.315 ≈ 242µH;    CP = (225.8nF × 900/8825)/2.315 ≈ 9.95nF

These are the values to try out in a simulation. They establish the desired power capability of
the resonant network in effect, at the desired input voltage, allowing for the desired input
variation too, and based on our estimate of K. In the simulation, the calculated LP will need to
be split up as follows: a coupled (magnetizing) portion K × LP = 0.5 × 242µH = 121µH, and a
leakage portion (1−K) × LP = 121µH, with the turns ratio set to NP/NS = VINMAX/VO = 400/48 = 8.33.
That is exactly how voltages scale through a transformer, and we are just using the same principle
to reflect 400V into the Primary, from 48V at the Secondary.
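The full scaling arithmetic of this section can be checked numerically. A sketch following the scaling rules stated in the text (power scales with the C/L ratio at fixed voltage; raising the frequency divides both L and C); the variable names here are our own:

```python
import math

# Kernel values and targets from the text
L_P, C_P = 57.2e-6, 225.8e-9  # kernel components (H, F)
K = 0.5
R_ac = 14.7                   # recommended AC load resistor from Figure 1.10
V_ac = (4 / math.pi) * 400    # FHA amplitude at 400 VDC, full bridge

avg_kernel_power = (V_ac**2 / R_ac) / 2  # sine-squared averages to one half
power_scale = 900 / avg_kernel_power     # < 1: we need less power than the kernel gives

f_hi_kernel = 1 / (2 * math.pi * math.sqrt((1 - K) * L_P * C_P))
freq_scale = 145e3 / f_hi_kernel         # place the new f_HI at 145 kHz

# Power scaling: L divides by the factor, C multiplies (the C/L ratio sets power).
# Frequency scaling: both L and C divide by the factor.
L_new = (L_P / power_scale) / freq_scale
C_new = (C_P * power_scale) / freq_scale

print(f"L_P' = {L_new * 1e6:.0f} uH, C_P' = {C_new * 1e9:.2f} nF")
```

Running this reproduces the scaled component values quoted above, to within rounding.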
Note: when we actually wind the transformer, we will be using full (or maybe half) integral
turns, so we will not get exactly 8.33 as recommended above. Further, as mentioned, we want
to aim for a slightly higher VOR than VINMAX, to get a gain target slightly more than unity. Because
we want to be able to do full regulation over the entire load range restricting ourselves to the
left of fHI. So if we manage to set the actual turns ratio NP/NS slightly greater than the
recommended value above, it would work better. We may have to wind a few iterations of the
transformer, to ensure that we stay within fLO and fHI always, with no “mysterious” dropouts.
As mentioned, simulators usually use the inverse of this as their “turns ratio”. So, for them we
need to enter
NSIM = VO/VINMAX = 48/400 = 0.12
We also realize that in the simulator, to test it for 900W (at desired output voltage), we simply
need to put a resistive load of value
Rload = VO²/Power = 48²/900 = 2.56 Ohms
Finally, looking at Figure 1.11, we see that the gain factor = 2 curve intersects the K=0.5 curve
to give us a frequency ratio of 0.75. That is the ratio of the location of the resonant peak of the
applicable resonant curve with respect to fHI. Now, this ratio will remain unchanged through
the scaling exercise, though the fHI of course has changed, and is now at 145kHz. So assuming
the same ratio, the location of the resonant peak, at which we should be able to get 900W at
200VDC input is 0.75 × 145kHz = 108.75kHz. That is the frequency we need to use at the lowest
input voltage, to confirm maximum RMS current too.
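As a quick arithmetic check of that simulation point:

```python
# The peak-location ratio read from Figure 1.11 is preserved by the scaling exercise
peak_ratio = 0.75  # resonant-peak location relative to f_HI (K = 0.5, gain factor 2)
f_hi_new = 145e3   # Hz, the scaled converter's f_HI
f_sim = peak_ratio * f_hi_new

print(f"{f_sim / 1e3:.2f} kHz")  # simulation frequency at the 200 VDC minimum input
```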
Note: There is no point doing a simulation at maximum input and fHI, i.e. 145kHz to “validate”
our design. That simulation point is meaningless. We will definitely get the desired output
voltage because we are assuming that at least the correct turns ratio has been set in the
transformer. But we also get almost any power we want really! We can keep reducing the load
resistor, and we can get 1800W, or 3000W, you name it! Because as we explained, at exactly
145kHz (“fHI”), we essentially have a converter that can deliver as much power as we want,
assuming ideal conditions (parasitics not included, or too small to matter).
What really determines that we have arrived at the right “shape” (no overdesign in selecting L
and C, and also no insufficient power capability) is the ability of the resulting resonant curve to
bulge just enough to give us the required gain factor, to compensate exactly for the droop we
expect at the input. Very simply, if the input can drop to half its maximum value, we need to
arrive at a curve that just about reaches Gain = 2 at its peak. If the curve is a bit “too flat”, say
reaching only Gain = 1.8, we will be unable to get maximum rated power at the minimum input.
If it is too peaky even at 900W, we have “overdesign”. We will be pulling 900W from the
converter at minimum input, but still be well below its peak, so we can lower the frequency
quite a bit more, and keep getting more and more power before it rolls off! Not recommended.
Figure 1.11: The location of the resonant peak
As we indicated, every LC network has an inherent power capability, which in practice affects
the input range we want it to handle. Knowing that fact, we can arrive at an optimal design. If a
converter is inadvertently designed for more power by an incorrect choice of L-C components,
then of course, if we understand basic power scaling, that has come about because we have too
large a C value, and too small an L value. Because to double power for example, we always halve
the inductance and double the capacitance! Keep in mind that resonant capacitors are more
expensive than coils or windings! Overdesign will cost us money. Unnecessary money.
Before we build what we just designed, we still want to know the RMS currents and so on, to
choose our FETs correctly for example. We mentioned that closed form equations in literature,
though impressive looking, are all approximate. Why? Because they are all based on the First
Harmonic Approximation! So, our effort has been to point to the right selection of the LC
components, and suggest a close-enough operating point to run simulations at. Thereby we will
get far more accurate estimates of RMS currents, for example.
So finally in Figure 1.12, we present the results of the first-shot simulation, based on the results
of our graphical method, itself based on our stunningly simple scaling laws. In the first iteration
we are setting the frequency to the predicted 108.75kHz. The input voltage of course is the
minimum of 200VDC.
The key waveforms are presented. We see that though we got a noticeably higher output, which
is OK, there is something troubling too. We do not have ZVS. The transistor turned ON into a
pedestal of positive current through it, so there will now be crossover losses. We don't need to
"see" it here, but they are present. Why is that? There are actually two contributing reasons:
a) As pointed out in Figure 1.9, there is a possibility that the network appears capacitive
at the peak value. We had discussed that.
b) Note that the output, even with diodes present, is higher than the predicted 48V. So we
have more gain than we expected. However, if we simulate the “equivalent” AC-AC
circuit, we do get exactly the predicted output voltage and thus predicted power. So part
of the error is from the FHA itself. What we are also thus learning is that going to a full
DC-DC circuit actually helps give us more power than we expected on the basis of the
“equivalent” AC-AC circuit. The ignored harmonics seem to be helping us! So in that
sense, the FHA is a good approximation: it errs on the conservative side. We won’t be
caught in a situation of less power than FHA predicted. But it also affects the estimate of
the best operating point. As a result, we may need to tweak it for a second simulation
run.
Since the switch waveforms revealed we were on the capacitive side (left side) of the resonant
peak, we should try increasing the frequency in small steps, till we see that little negative glitch
(encircled) in the switches just before they turn ON. That means that just before the switch
turned ON, current was going reverse: through its body-diode (or through the Schottky diode
placed across it: recommended for higher reliability, to avoid shoot-through). So we have ZVS
now, because that body-diode current flow has forced the voltage across the switch too, to be
almost zero at the start of its transition from non-conducting to conducting states. No crossover
loss term.
In our case, after tweaking the frequency a bit higher, we arrive in the ZVS region at 115kHz.
Note that the voltage is still a bit too high. But the control loop will take care of that by settling
at a slightly higher frequency than even 115kHz. We don’t need to know what that point exactly
is though. The last simulation should give more realistic estimates of RMS current ratings than
we would normally ever get using closed-form equations available in literature, all based on FHA.
For example, consider this one below for the gain of an LLC as a function of frequency,
surprisingly often found in literature propagating almost endlessly with a mistake/typo, going
undetected from one "respectable" reference App Note to another, without anyone actually
inserting it into a Mathcad spreadsheet and plotting it to check, as we did. Here it is, now
corrected:
Gain(f) = 1/√{ [1 + ((1−K)/K) × (1 − 1/x²)]² + [Q × (x − 1/x)]² },  where x = f/fHI

Figure 1.12: Design Validation via Simulations (900W)
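This gain expression is easy to implement and sanity-check, here in Python rather than Mathcad. Note that at x = 1 (i.e. f = fHI) the gain comes out exactly 1 for any Q, which is precisely the load-independent "magic operating point" discussed earlier:

```python
import math

def llc_gain(x: float, K: float, Q: float) -> float:
    """FHA gain of the LLC versus normalized frequency x = f / f_HI,
    written in terms of the coupling coefficient K and the loaded Q."""
    real = 1 + ((1 - K) / K) * (1 - 1 / x**2)
    imag = Q * (x - 1 / x)
    return 1 / math.sqrt(real**2 + imag**2)

# At x = 1 the leakage/C branch is a "dead short": unity gain for ANY load (Q)
for Q in (0.1, 0.5, 2.0, 10.0):
    print(Q, llc_gain(1.0, K=0.9, Q=Q))
```

Sweeping x below 1 for different Q values reproduces the load-dependent "bulge" of the gain curves in Figure 1.9.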
How about the PoE design example, using a half bridge? Let us reconfirm the values for that too.
We want to ensure we have 25.5 Watts over the range 32-52 VDC. We want an output voltage
of 12V. So, the gain (bulge) we want in our resonant curve is 52/32 = 1.625. Suppose we believe
that the transformer we build can/will have a coupling coefficient of K=0.9. We are going to see
if we need to scale the values used in the kernel of ours: LP = 57.2µH and CP = 225.8nF. Hopefully
we don’t have to scale them too much! Or there is something really wrong with our kernel if it
can’t even do what we think it can! Of course the graphs were written for any K and any input
variation in the Mathcad spreadsheet but here, for starters, we are just sticking to the PoE
example we had in Switching Power Supply Design and Optimization, Second edition. In other
words, so far we are looking only at K=0.9 and 32-52 VDC input range.
A slight tweak now: in the book, we had set a nominal gain target of 1.05, and recommended, as
we do here too, setting the turns ratio accordingly. So, as we did in the book, we calculate the
required "additional gain" over that initial 1.05. In other words, though 52/32 = 1.625, we did
not use 1.625, but took the gain factor as 1.625/1.05 = 1.55 instead. Either way it is all in the
ballpark and we should be very close.
Now, looking at our graphical aid, Figure 1.10, for K=0.9 and a gain factor of 1.55, we get a little
over 20 ohms. The full Mathcad spreadsheet presented in the Appendix gives an exact
numerical value of 20.6 ohms. Which is what we had also used in Figure 1.9 as the
recommended value “Rtrial1”. That was the basic value, from which we had then generated
multiples of load, just for display purposes.
Now, let us first confirm the power we can get out of this at the maximum input voltage. We are
applying 52VDC through a half bridge. So, since this is a half bridge, not a full bridge, its
equivalent AC voltage is
VINMAX_AC = (2/π) × VINMAX_DC Volts
VINMAX_AC = (2/π) × 52 = 33.10 Volts

Peak_AC_Power = 33.1²/20.6 = 53.18 Watts

Average_Kernel_Power = Peak_AC_Power/2 = 53.18/2 = 26.6 Watts
Which is pretty close to the 25.5W we want. We can think of this as a bit higher to account for
estimated efficiency degradation from 100%! It is very much in the ballpark and we do not need
any scaling for power, just as we had expected and hoped for! Because that is exactly what we
had declared our kernel could do in the first place! So, treat this as a revalidation of the kernel.
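As a quick sanity check, the power numbers above can be reproduced in a few lines (a sketch using the values from this PoE example; the variable names are ours):

```python
from math import pi

# Half bridge driven from the maximum DC input: the tank sees VIN/2,
# and the FHA fundamental peak is (4/pi) * (VIN/2) = (2/pi) * VIN.
VIN_MAX_DC = 52.0    # V
R_AC = 20.6          # ohms, the Primary-side AC load from Figure 1.10

v_ac_peak = (2 / pi) * VIN_MAX_DC       # ~33.1 V
peak_ac_power = v_ac_peak**2 / R_AC     # ~53.2 W
avg_kernel_power = peak_ac_power / 2    # ~26.6 W, close to the 25.5 W target
```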
As for any required frequency scaling, if we look at Switching Power Supply Design and
Optimization, Second Edition, we will see that the kernel components were initially chosen such
that with K=0.9, fHI would be at 140kHz. Which is a good target for EMI reasons. Let us check
fhi_Kernel = 1/(2π√((1 − K) × LP × CP)) = 1/(2π√((1 − 0.9) × 57.2µ × 225.8n)) = 140.0 kHz
So, as we had expected, we don’t need a scaling factor for frequency either. The kernel does
exactly what the Mathcad spreadsheet said it would.
From Figure 1.11, we see that the frequency ratio predicted in this case is 0.38 × fHI. That leads
to 0.38 × 140 = 53.2 kHz. The Mathcad spreadsheet predicts 52.25kHz (and 115.1 kHz for the
slightly boosted nominal gain value choice of 1.05, instead of 1). Close enough!
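The frequency checks above are equally mechanical. A small sketch (our variable names):

```python
from math import pi, sqrt

K = 0.9
LP = 57.2e-6     # H, kernel inductance
CP = 225.8e-9    # F, kernel capacitance

f_hi = 1 / (2 * pi * sqrt((1 - K) * LP * CP))   # ~140 kHz
f_op = 0.38 * f_hi                              # ratio read off Figure 1.11, ~53.2 kHz
```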
We want 12VDC output. But we do want a VOR of about 26VDC (since that is the effective input
DC voltage in a half bridge). We recommend setting VOR slightly higher though, just to ensure a
gain of 1.05 instead of 1. Anyway, the nominal turns ratio is around 26/12 = 2.167 (we may
also want to leave margin for two diode drops if we are not using synchronous rectification, but
instead a standard four-diode bridge rectifier).
Note: Since we are applying 52VDC to a half bridge, and since the resonant capacitor will end
up with a DC level of 26V, it is effectively only applying 26VDC to the bridge. Our VOR is based
on that. That is why we got a turns ratio of 2.167 above, NOT 4.33! Beware!
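A one-line guard against exactly this mistake (a hypothetical helper, names ours; the halving applies only to the half bridge):

```python
def turns_ratio(vin_max_dc, vout, half_bridge=True):
    # In a half bridge the resonant capacitor sits at VIN/2, so the
    # effective input is halved before computing VOR / VOUT.
    v_eff = vin_max_dc / 2 if half_bridge else vin_max_dc
    return v_eff / vout

n = turns_ratio(52, 12)   # ~2.167, NOT 4.33
```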
If we want to guarantee 25.5 Watts from a 12V output, the desired resistor we want to put in
our simulator is
Rload = VO²/Power = 12²/25.5 = 5.65 ohms
Note that Figure 1.10 had recommended we use 20.6 ohms as the Primary-side AC load
resistor. Had we reflected this to the Secondary side as an equivalent DC load resistor, in a DC-
DC converter, we would get as per Figure 1.5c
Rload_reflected = (π²/8) × Rac/N² = (π²/8) × 20.6/2.167² = 5.4 ohms
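The two resistor values being compared can be computed side by side (a sketch with our names, using the FHA reflection of Figure 1.5c):

```python
from math import pi

VOUT, POUT = 12.0, 25.5
N = 2.167                 # turns ratio from the half-bridge VOR
R_AC = 20.6               # Primary-side AC resistor from Figure 1.10

r_dc = VOUT**2 / POUT                    # ~5.65 ohms, desired DC load
r_reflected = (pi**2 / 8) * R_AC / N**2  # ~5.4 ohms, FHA reflection
```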
Now comes the big doubt: that is also very close to the 5.65 ohms calculated by simple V²/R
power calculations. So we have a basic question here once again, just when we had started
thinking we had mastered resonance! We ask: why did we even need Figure 1.10? We could
have just reflected the desired DC load resistor, based on desired power, over to the Primary
side as an equivalent AC load resistor (using FHA and turns ratio squared). That would give us
almost the same answer as the 20.6 ohms suggestion we got from Figure 1.10! So did we even
need Figure 1.10? Was this all in vain?
Wrong! Resonance is way trickier. Look closely at Figure 1.10 again. We ask: what if we had
wanted our PoE converter to work over a 1:3.5 input range? That would be 52VDC down to
52/3.5 ≈ 14.9VDC. Now, in this case, Figure 1.10 says, you need to pick an AC load resistor of 49.5
ohms! Not 20.6 ohms or anything close. Intuitively, that is because we want the resonant peak
to “bulge” a bit more at the middle, to compensate for the input droop! We see we now have to
back off on the maximum allowed R, to create that “bulge”. People call this “increasing the Q”.
That is correct. Now, if we still want to guarantee 25.5W, we are in big trouble. This converter
can do much less now. It only works with a much larger resistor.
At the peak AC voltage of 26×4/π = 33.1V, the peak power would now be much less, since we
now have 49.5 ohms across it:
Peak_AC_Power = 33.1²/49.5 = 22.134 Watts
Average_Kernel_Power = Peak_AC_Power/2 = 22.134/2 = 11.07 Watts
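The same two-line power check, repeated for the wide-range resistor (values from the text):

```python
from math import pi

v_peak = (2 / pi) * 52          # ~33.1 V, unchanged by the wider range
p_peak = v_peak**2 / 49.5       # ~22.1 W with the 49.5 ohm resistor
p_avg = p_peak / 2              # ~11.07 W: far below the 25.5 W target
```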
As per Figure 1.11, the frequency ratio for K=0.9 and gain factor 3.5 is about 0.34. So we need
to simulate close to 0.34 × fHI = 0.34 × 140 = 47.6 kHz. The inductance will need to be split for
simulations as shown in the schematic of Figure 1.13.
Figure 1.13: PoE converter with 1:3.5 Input Voltage Range (AC-AC equivalent too)
In Figure 1.13, we see the schematic and the results of that in Figure 1.14. Note that to firm
things up in our minds, we ran both the equivalent AC-AC circuit and the DC-DC converter. We
have indicated how we need to change input voltage and output load resistors in going between
the two. And since we ran both schematics in one simulation run, the results appear together,
in Figure 1.14. Since the switch waveform has a very, very slight negative glitch at the end of
its current waveform, it is running very slightly capacitive. A slight increase in operating
frequency will cause the little negative glitch to now appear at the beginning of its current
waveform, implying ZVS.
And yes, despite the diode drops, we get almost perfectly 12V at the output of our DC-DC
converter, even at 15V input, so our 1:3.5 input variation, hitherto unheard of for LLC
converters (and now you know why!), was tackled completely and correctly, indicating that we
are spot on in our understanding of resonance finally! Just graphical aids (two figures) and a
couple of scaling laws!
The diode waveform always indicates zero-current switching. And it is usually almost half-sinusoidal. The vacant gap between the pulses occurs only when we are switching to the left
of fHI. It disappears, and starts looking more distorted too, not necessarily looking like ZCS, if we
switch to the right of fHI (as Qi, and most others too, typically end up doing, due to random gain
targets and random C/L ratios too, not to mention the crippling double-resonator flaw).
Figure 1.14: PoE converter with 1:3.5 Input Voltage Range (simulation results)
1.18 Conclusion
With that we conclude this chapter. We have probably arrived! We have learned to scale a
kernel to any power level and any frequency, one capable of being the “front-end” converter too,
since it is tolerant to quantifiably wide input variations. No overdesign either. The ability to
change coupling too, to almost any desired value, means we can use it even for relatively loosely
coupled wireless power transfer systems. All this power, quite literally, through two simple
curves, and even simpler scaling laws, hitherto hidden in plain sight. And of course from
resonance itself! We have learned how to tame it, and use it. We can start building proper
converters now, fearlessly.
But the journey is still not fully over! Because we didn’t banish the devil, it just retreated deep
into the details! Now we need a really carefully thought-out control algorithm too, to enable all
this really well. And that is going to be proprietary, by all accounts.
1.19 Appendix 1 – MathCad Spreadsheet
Figure 1.16: MathCad Spreadsheet Page 2
Figure 1.17: MathCad Spreadsheet Page 3
Figure 1.18: MathCad Spreadsheet Page 4
1.20 Appendix 2 – Alternative LLC design strategy: the Graphical approach
Scaling laws are one of the most important concepts you get from this book. By combining those
laws with Nicola Rosano’s graphical approach, it is possible to get the LLC design in seconds,
whatever best design entry point you decide to pick, higher than 1 or lower. To grasp some of the
tricky concepts mentioned in this appendix, we strongly recommend going through the full LLC
chapter first, then moving on to the graphical approach.
Designing the LLC using a graphical approach is very powerful, because you do not need
to look at the power stage equations. It is something we use to increase design speed
further. To get maximum flexibility, though, this approach needs to be combined with the
scaling laws.
Let us study the same kernel as always, sweeping the tank resistor. Looking at the input
impedance phase, we can see that if the tank resistor increases, the minimum frequency that
preserves ZVS decreases, while the maximum tank gain we get at those frequencies tends to
increase. The image below clarifies these statements.
[Figure: tank voltage gain (dB) and input impedance Zin phase (deg) versus frequency, sweeping Rtank through 300, 450 and 900 Ohm; the minimum ZVS frequencies f1, f2, f3 are marked on the phase plot]
Let us place all these points on two plots in which we have Rtank on the common X-axis, while
reporting on two different Y-axes the two vectors previously collected: the minimum working
frequency vector and the maximum achievable gain vector.
[Plot 1: maximum gain Gain_max (points g1, g2, g3) versus Rtank; Plot 2: minimum frequency f_min (points f1, f2, f3) versus Rtank]
Figure 1.20: Collecting the maximum gain and minimum frequency vectors of our LLC kernel
Keeping these two plots in mind, let us play with Ohm’s law now. If we remember the formula
for the tank output voltage, we are aware it can be written in two ways: either by using
the maximum bus (input) voltage and the best gain entry point, or, vice versa, by using the
minimum bus voltage and the maximum gain we need.

V_out = V_busmax × Gain_entry    V_out = V_busmin × Gain_max

V_busmax × Gain_entry = V_busmin × Gain_max  →  V_busmax/V_busmin = Gain_max/Gain_entry
It means that by simply calculating the input voltage ratio, given as LLC design input data, we are
able to check which tank resistor satisfies the maximum gain we are looking for
at minimum bus voltage.
This is valid whatever LLC seed you pick, whatever input inverter stage you pick,
whatever rectifier you pick, and even whatever turns ratio you select. It depends
only on the initial LLC tank.
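The lookup reduces to one division (a sketch; numbers from the PoE example of the chapter, names ours):

```python
V_BUS_MAX, V_BUS_MIN = 52.0, 32.0
GAIN_ENTRY = 1.05   # best design entry point

ratio = V_BUS_MAX / V_BUS_MIN    # 1.625: the value entered on Plot 1
gain_max = GAIN_ENTRY * ratio    # ~1.71: absolute tank gain needed at minimum bus
# Rtank is then read off Plot 1 where Gain_max(Rtank) reaches this value.
```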
The frequency scaling factor is obtained as the ratio between two frequencies: the maximum
frequency set by the design, over the maximum resonance frequency imposed by the LLC kernel we
studied initially. Obviously, you get the same frequency scaling factor if, instead of the ratio of
maximum frequencies, you use the ratio of minimum frequencies. The k_f factor is then
calculated as:

k_f = f_max_design / f_max_kernel
Now, if we scale the previously stored Plot 2 (Y-axis) by the k_f value, we instantly get the
vector that collects all the real minimum working frequencies, in agreement with the
maximum gain we need in the real working conditions. Plot 1 and Plot 2 are then updated as
follows:
[Plot 1: Gain_max/1.05 = V_busmax/V_busmin versus Rtank (points g1, g2, g3); Plot 2: f_min × k_f versus Rtank (points f1, f2, f3)]
Figure 1.22: Scaling the maximum gain vector and the minimum frequency vector to the real
working conditions
Let us do the same calculations for the power scaling factor. If the frequency scaling factor is a
ratio between two frequencies, it follows that the power scaling factor is a ratio between two powers.
Which powers do we involve in this ratio? The numerator is the peak power we need to target
at the bridge input, taking into account our requirements; the denominator is the peak power
sourced by the LLC kernel in the same circuit section, so at the bridge input (or, equivalently, on the
primary side of the transformer).
k_p = P_peak_req@bridge / P_peak_kernel@bridge = 2 × P_req / [ (V_busmax × (4/π) × 1.05)² / R_tank ]

k_p = R_tank × (π² / (8 × (1.05)²)) × P_req / V_busmax²
We recall that in the previous formula the ratio 4/π identifies the gain introduced by the First
Harmonic Approximation method, while the quantity 1.05 identifies the so-called best gain entry
point.
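The k_p expression can be wrapped in a small helper (a sketch, names ours; we assume V is the bus voltage feeding the bridge and 1.05 the entry gain, as in the derivation above):

```python
from math import pi

def power_scaling_factor(r_tank, p_req, v_bus_max, gain_entry=1.05):
    # k_p = R_tank * (pi^2 / (8 * gain_entry^2)) * P_req / V_bus_max^2
    # i.e. 2*P_req divided by the kernel's peak bridge power,
    # (V * 4/pi * gain_entry)^2 / R_tank.
    return r_tank * (pi**2 / (8 * gain_entry**2)) * p_req / v_bus_max**2

kp = power_scaling_factor(20.6, 25.5, 52.0)   # ~0.217 for the PoE numbers
```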
The nice feature is that, if we carry on with the calculations, we find the power scaling factor can be
expressed as the product of two terms: one term that is Rtank dependent (so not related
to the requirements), and another term dependent on the requirements. If we sketch the
results on a Plot 3, we can extend the previous two plots as follows:
[Plot 1: Gain_max/1.05 = V_busmax/V_busmin versus Rtank; Plot 2: f_min × k_f versus Rtank; Plot 3: k_p = Y × P_req/V_busmax² versus Rtank]
At this point the smart engineer will ask: OK, everything is clear, but what about the higher
frequency vector? Does it always stay the same? The answer is “almost”, in the sense that it varies
slightly, because we selected our best design entry point near the higher resonance, which is, by
definition, the “load independent point”. So if we zoom in on the load independent point area we
get:
[Zoom near the load independent point: tank voltage gain versus frequency for Rtank = 300, 450 and 900 Ohm; the 1.05 gain level intersects the three curves at f1, f2, f3]
By increasing the tank resistor, we see that, to keep our best gain entry point constant, the
maximum frequency has to increase slightly. To map these results onto the real working
conditions, the maximum working frequency vector is then simply obtained by multiplying the
k_f factor previously calculated by the vector of all the maximum frequencies we recorded while
studying the initial LLC kernel (f1, f2, f3 in the previous figure), at which the gain was locked
at 1.05.
If we do this multiplication, placing the results in the same way as done until now, we can update
Plot 2 as follows. Why does the second plot become extremely powerful? Because it actually collects
all possible frequencies according to the maximum gain we are looking for, while ZVS (on the primary)
and ZCS (on the secondary) are always guaranteed, because of the way we extracted these
plots.
[Plot 1: Gain_max/1.05 = V_busmax/V_busmin versus Rtank; Plot 2: f_min × k_f and f_max × k_f versus Rtank; Plot 3: k_p = Y × P_req/V_busmax² versus Rtank]
Figure 1.25: LLC design based on the Scaling laws and the Graphical Approach
LLC Alternative Design Procedure Example 1
We invite the reader to extract these three plots on his own. Once you
have them, you can use the scaling laws for every LLC design you need. Our
“kernel” circuit is shown on the left of Plot 1.
Design Requirements: Vin [V] = 52-32; Vout [V] = 12; Pout [W] =
25.5; Kcoupling = 0.9; fmax [kHz] = 140;
Design Procedure:
LLC Alternative Design Procedure Example 2
Design Requirements:
Vin [V] = 200-400; Vout [V] = 48; Pout [W] = 900; Kcoupling = 0.5;
fmax [kHz] = 145; Our kernel circuit is shown on the left of Plot 1.
Design:
L = 116.4µH; C = 10.4nF; R = 2.56; C_out = 1.7mF
n = (V_busmax × 1.05)/V_out = (400 × 1.05)/48 = 8.75
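Of the expressions above, the turns-ratio line follows directly from the requirements (a sketch, names ours):

```python
V_BUS_MAX, V_OUT = 400.0, 48.0
GAIN_ENTRY = 1.05

# Reflected output voltage is set by the maximum bus and the entry gain.
n = V_BUS_MAX * GAIN_ENTRY / V_OUT   # = 8.75, matching the value in the text
```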
The graphical approach is very fast. However, it is not so intuitive at first to see how much
maximum power we can deliver at the maximum input voltage. Let us focus on design
Example 1. The first thing to do is identify the minimum tank resistor that loads our
LLC kernel. This value is obtained by focusing on Plot 1 and looking at the intersection of the
two asymptotes reported in red (the curve knee).
[Figure 1.26: Plot 3 with Ymax and Ymin marked]
Note also that the k_p factor cannot change, and the maximum input voltage is constant and cannot
change; so the only value that can differ is the “Y” parameter in the k_p formula below. Actually,
the “Y” value varies if the tank resistor varies.

k_p = Y × P_req / V_busmax²
Power extrema can be obtained by replacing “Y” once with “Ymax” (Figure 1.26) and once with
“Ymin”, getting the following two equations:

k_p = Y × P / V_busmax²  →  P_max = k_p × V_busmax² / Y_min  (at V_busmax)
P_min = k_p × V_busmax² / Y_max  (at V_busmin or V_busmax)
What happens if we want to generalize the previous three-plot approach for different inductance
ratios? No problem. The answer is in the next image, in which we have collected three cases with
m = L2/L1 equal to 3, 6, 9. Obviously it can be extended to all the cases the reader wants. Note also
that we have split the frequency plot (previously called Plot 2) into two more plots: Plot 2 for the
maximum frequency vector and Plot 3 for the minimum frequency vector.
[Universal curves versus Rtank, with LC series and LLC regions marked: input voltage ratio; f_max_eff = y × f_max_set; f_min_eff = y × k_f; k_p = y × Power/V_busmax²]
Figure 1.27: Universal Curves (m=L2/L1) : best design entry gain > 1 (here 1.05)
[Universal curves versus Rtank: input voltage ratio; f_max_eff = y × f_max_set; f_min_eff = y × k_f; k_p = y × Power/V_busmax²]
Figure 1.28: Universal Curves (m=L2/L1) : best design entry gain < 1 (here 0.85)
Let us start by commenting on Figure 1.27. By sweeping Rtank and focusing on Plot 2, we can
see an apparently “strange” behavior: for certain tank resistor values, before
reaching the curve cusp, the vector of the maximum working frequencies, whatever
inductance ratio we pick, decreases as Rtank increases. This shape seems to contradict what we
have learnt until now, in the sense that Figure 1.25 shows the opposite behavior. What is going
on then?
The answer is simple, and the explanation is the following: if we have set the best gain entry
point to be higher than 1, in this case 1.05, there are certain tank resistor values for which the
voltage gain (Bode plot) cannot rise above 1, due to the circuit’s nature. Actually, all
these tank resistors, from 1 Ohm to something near 200 Ohm, are not sufficient to boost the gain
above 1.
What is the algorithm trying to say here? The routine is simply saying: look, I am
able to guarantee ZVS somewhere, but I am not able to give you a gain higher than 1. Practically
speaking, my LLC converter is mimicking a series LC converter for this bunch of tank resistor
values. The situation is illustrated in the following figure.
Figure 1.29: Explaining the apparently "strange" behavior for all Rtank values less than 220
Ohm
At the same time, where do we get ZVS? We should remember that we get ZVS on the right side of
the resonance. This “strange” behavior is then justified by the fact that, by increasing the tank
resistor value, for all curves that offer a maximum gain lower than our target, the frequency at
which ZVS kicks in tends to decrease. Careful here: the frequency at which we get ZVS with a low
tank resistor value is higher than the one we get with a higher tank resistor value. Is this
always true? The answer is YES, until our best gain entry point kicks in. This happens
by increasing the tank resistor value beyond 200-220 Ohm.
Once we move beyond this value, the situation becomes more familiar, in the sense that the complete
LLC structure is restored. We can see that, by increasing the tank resistor, we are able to intersect
the best gain entry point (1.05). At the same time, we can see how the maximum working
frequency tends to increase as the tank resistor increases. This is our expected behavior.
Note also, from Plot 3, the curves suggest that if the inductance ratio decreases, the frequency
span needed to get the same gain decreases as well. It is visible, for example, by considering the two
cases in red and blue. You can see how the minimum frequency limit at light load is lower if we
use a lower inductance ratio, and higher if we use a higher inductance ratio.
What happens to the scaling factors (both power and frequency), and to the turns ratio as well, if
the inductance ratio changes? The answer is: nothing, everything stays unchanged!
If we look at Figure 1.28, we see the universal curves for the case with a best gain entry
point lower than one (0.85). Here we do not see the strange behavior encountered with
the previous curves reported in Figure 1.27, in the sense that the maximum working frequency
vector always increases with the tank resistor. The explanation is simple here as well: whatever
tank resistor you pick, the voltage gain curve is able to guarantee the requested gain lower than
one.
Now an added question arises spontaneously: where is it convenient to work, below or above
the higher (series) resonance? This is strongly dependent on your application.
There are tons of designs made working both above and below the series resonance. Let us collect
the pros and cons together.
Working above the higher resonance (best gain lower than 1): for the same delta gain
required, you have a higher frequency span. It is true you get lower RMS
currents for the same power, because the impedance seen by the source increases; at
the same time, you lose ZCS on the secondary. Moreover, switching losses are higher
because of the higher frequency, and even the frequency-related core loss component is
higher. A trade-off between losses and efficiency is needed.
Working at the resonance: ZVS is achieved, and ZCS is achieved even though the diodes are called
to work in CCM.
Working below the higher resonance (best gain higher than 1): ZVS is achieved, and we get
DCM on the secondary rectifiers, so they are soft switched with ZCS, while all the
disadvantages of working above the higher resonance become advantages, and vice
versa: working below the higher resonance we get higher RMS currents.
With these last notes we end this annex, hoping you have grasped how to properly design the LLC
faster. Make the effort once! Study your beloved kernel, and then reduce the effort by using the
scaling laws with the graphical approach.
1.21 Appendix 3 – Building the LLC discrete modulator
Once you are called to design the LLC, one of the obstacles you will find is how to properly
model the modulator. Here Nicola Rosano presents one possible solution. The simulator used
here is TINA from DesignSoft.
The LLC modulator is nothing other than a Voltage Controlled Oscillator (VCO). A variable-frequency
sawtooth wave is generated by charging and discharging a capacitor with a constant
current that sets the modulator minimum frequency fMIN. Once the loop is called to increase the
modulator frequency, the initial current is increased proportionally to the feedback control
voltage, charging the capacitor faster and allowing an increase in the modulator frequency up to fMAX.
The same capacitor is discharged at a constant current (constant rate), regardless of the
switching frequency.
Example:
C = 100nF; V_cmax = 5V; ΔV = 3V; f_min = 100kHz; f_max = 300kHz;
V_ctrlLOW = 0V; V_ctrlHIGH = 5V; t_dead = 200ns
Step 2: tune the Schmitt trigger and compute the discharging current
The Schmitt trigger shall be tuned to have a hysteresis center equal to:
H_center = V_cmax − ΔV/2 = 3.5V
while the hysteresis width is:
H_width = ΔV = 3V
All modulator parts are described below:
The U1 block is the hysteresis comparator.
The current generator sets a constant current and charges the capacitor up to V_cmax. For
example, if the minimum required frequency is 100kHz and the capacitor ramp swing is
set to 3V, we get:
I_charge = C × ΔV/(T − t_dead) = 100n × 3/(1/100k − 200n) = 30.61 mA
[Schematic: hysteresis comparator U1 with capacitor C = 100nF, charging source I_fmin_charge = 30.612mA and discharging source I_fmin_discharge = 1.5A]
The Voltage Controlled Current Source (VCCS) imposes its discharging current if, and
only if, the hysteresis comparator output is high. In this case the discharging current will
be high, due to the short interval set by t_dead:

I_discharge = C × ΔV/t_dead = 100n × 3/200n = 1.5 A
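Both modulator currents follow from C·ΔV/Δt (a sketch using the example values, names ours):

```python
C = 100e-9       # F, ramp capacitor
DV = 3.0         # V, ramp swing
F_MIN = 100e3    # Hz, minimum modulator frequency
T_DEAD = 200e-9  # s, dead time

i_charge = C * DV / (1 / F_MIN - T_DEAD)   # ~30.61 mA sets f_min
i_discharge = C * DV / T_DEAD              # 1.5 A resets the ramp within t_dead
```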
[Waveforms: capacitor voltage Vc (sawtooth) and capacitor current Ic, over 0 to 50µs]
Step 3: Add the remaining logic network
The U3 block is a simple inverter, while the U2 block is a JK flip-flop. It acts almost in the same
way as the SR flip-flop; the JK advantage is that there is no change in state when
the J and K inputs are both LOW, and it avoids the basic switching problems the S-R flip-flop
suffers from.
Figure 1.32: Adding the inverter, the Flip Flop and AND ports
U4 and U5 act as AND gates, giving the suitable shape to the two complementary gate
signals.
The VCVS adapts the hysteresis comparator output to the JK flip-flop input logic. It
simply scales the hysteresis comparator output five times, giving 5V at the flip-flop
input.
Step 4: adjust the frequency and the driving amplitude
The dead time is preserved, so there is no shoot-through risk between complementary
switches.
The flip-flop locks the output gate amplitude to 3.3V. To guarantee the right driving
amplitude for MOSFET or GaN FET switches, two VCVS blocks have been cascaded at
the output section (x1.5 gain for GaN FETs and x3.63 gain for MOSFETs, assuming we drive
the latter at 12V).
[Waveforms: complementary gate signals Vg1 and Vg2, 0 to 3.5V, from 22µs to 47µs]
Figure 1.35: The output signal amplitude is correct while the frequency is not
We notice both gate signals have a frequency exactly half of what we were looking for.
This issue is easily explained: the frequency boundaries we chose at the beginning
actually refer to the capacitor sawtooth voltage, not to the switches. The JK flip-flop
“splits” the capacitor voltage frequency to create two complementary signals at half of
the initial switching frequency. The issue is easily fixed by targeting initial starting
frequencies twice as big. So:

f_min = 2 x 100kHz
f_max = 2 x 300kHz
Note all previous results are valid before applying the x2 factor to adjust the frequency
boundaries. Once the x2 factor is included, all results previously calculated scale
accordingly.
Below you can find the Mathcad spreadsheet.
{Input Parameters}
C:=100n
Vcmax:=5
DV:=3
VctrlLOW:=0
VctrlHIGH:=5
fmin:=2*100k
fmax:=2*300k
tdead:=200n
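The spreadsheet above translates directly into a few lines (a sketch; it reproduces the 62.5 mA charge current and ~28.4 mA/V VCCS gain seen in the test schematics of Step 6):

```python
C, DV, T_DEAD = 100e-9, 3.0, 200e-9
f_min, f_max = 2 * 100e3, 2 * 300e3   # x2: the flip-flop halves the frequency

i_min = C * DV / (1 / f_min - T_DEAD)   # 62.5 mA at Vctrl = 0 V
i_max = C * DV / (1 / f_max - T_DEAD)   # ~204.5 mA at Vctrl = 5 V
vccs_gain = (i_max - i_min) / 5.0       # ~28.409 mA/V over the 0-5 V control range
```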
Step 6: Testing the model
[Test schematic: Vfeed = 0V; I_fmin_charge = 62.5mA; VCCS gain = 28.409mA/V]
Figure 1.36: If the control voltage is 0V, the minimum frequency of 100kHz is correctly obtained
Figure 1.37: If the control voltage is 5V, the maximum frequency of 300kHz is correctly obtained
1.22 Appendix 4 – Building an LLC hybrid modulator
It is possible to demonstrate that, by mixing frequency modulation and phase shift modulation, the
LLC efficiency is not only higher than what you get using frequency modulation alone, but
it also stays nearly constant over the full power range. In this appendix a very fast model to
accomplish this task will be presented.
The phase modulation will fight bus variations, and the frequency modulation will fight load
variations. We will not go through the LLC power stage design, because it is quite similar to the
one presented before. What we would like to discuss here is its control, which is quite tricky in this
context.
We are looking for a building block which accepts three inputs:
A voltage supply for all the internal logic
A voltage signal which locks the desired frequency
A voltage signal which enables/disables the phase shift
We anticipate that the tricky part happens when we are called to reach a phase shift near 180
degrees. We need to be more than sure to avoid shoot-through phenomena, otherwise we can
break our power stage (switch overstress, magnetics saturation, etc.).
We start by inserting the VCO. We can use either the discrete model provided in the previous
appendix or the custom block provided by the TINA simulator; the result is identical. We apply
a voltage at the input and get a sawtooth wave at the output. Consider also one lesson
learnt from the previous appendix: i.e., target an initial sawtooth wave at twice the frequency
at which we would like to switch our power transistors.
The TINA model gives a bipolar waveform at the output. So, to get the unipolar version, we can
proceed this way: target the output amplitude to be 1V (bipolar); add a 1V offset (getting a 0 - 2V
peak); divide the output by 2. Then set up the VCO using the following parameters:
Vout = 1 V
Conversion gain (Cg) = 10k / V
Vout offset = 1V
Duty cycle = 0.001
Let us assume we want to switch our FETs at 100kHz. We target the VCO input voltage at:

f_osc = V_in × VCO_gain  →  V_in = f_osc/VCO_gain = (2 × 100k)/10k = 20 V
Comp2 generates the clock, while the JK flip-flop U5 creates the reference square wave. This is
used as the reference whatever phase shift we are looking for. In particular, it
can be used to generate two symmetrical square waves when we are approaching a 180 degree
shift, or two phase-shifted square waves when we are looking for less than 180 degrees of shift.
Case 1: phase shift > 170°
This case is obtained if, and only if, the Comp1 output is high. In this case the D flip-flop U4 is
enabled, as well as U7 and U14, while the complementary U9 and U10 are switched off. The
chain CS2, CS1 is then enabled, providing the required driving signals. Note that they preserve the
dead time imposed by the delay blocks U2 and U3. At the same time, CS3 and CS5 simply
compare the symmetrical outputs coming out of U7, U14, U9, U10, giving at the output only the
signal couple that is NOT null.
V_delay = shift × 0.5/90

Ex 1: 170 deg. shift → V_delay = 170 × 0.5/90 = 0.944 V
Ex 2: 10 deg. shift → V_delay = 10 × 0.5/90 = 55.5 mV
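The linear degrees-to-volts map can be sketched as (names ours):

```python
def v_delay(shift_deg):
    # 90 degrees of shift corresponds to 0.5 V on the delay input.
    return shift_deg * 0.5 / 90

v_170 = v_delay(170)   # ~0.944 V
v_10 = v_delay(10)     # ~55.5 mV
```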
Figure 1.38: LLC modulator hybrid model: frequency modulation and phase shift modulation
{MathCad Spreadsheet}
Vcc:=10
fMos:=106.852k
deg:=90 {if phase mod. is used}
fosc:=2*fMos
fosc=[213.704k]
OscGain_Hz_V:=10k
Vfreq:=fosc/OscGain_Hz_V
Vfreq=[21.3704]
Vdelay:=deg*0.5/90
Vdelay=[500m]
[Waveforms: gate signals Vg1 and Vg2 at 90 deg. shift and at 30 deg. shift]
CHAPTER 2 – THE DUAL ACTIVE BRIDGE
2.1 Introduction
This chapter opens with a basic question, one that clarifies and reiterates relevant aspects of
Chapter 1 of A-Z Second Edition, as a refresher, before we feel comfortable enough to head into
more exotic and complex topologies like the phase-modulated full-bridge (PMFB) and the dual
active bridge (DAB). The first question is: why do we need different topologies?
Various answers to that could be as follows, pointing to certain topologies as candidates; in the
process indicating the huge promise that DAB, in particular, offers.
A) Setting an output voltage. For example, step down (Buck), step up (Boost), step-up/down
(Buck-Boost). These are simple inductor-based topologies, with simple DC transfer functions
(independent of load current, to a first degree, in continuous conduction mode). We could also
invoke the 4-switch Buck-Boost (basically a cascaded Buck and Boost), as discussed in Chapter
9, since that corrects the inherent polarity inversion that is unfortunately part of the basic Buck-
Boost topology. Yes, for all three fundamental topologies, we have a simple direction of control
or correction (“DoC”) too: to increase the output, increase the duty cycle. Can’t go wrong!
expense of higher EMI), and hence reduce switching (“V-I-t” crossover) losses, and are
therefore helpful at very high frequencies of around 2MHz and above.
But at the typical low frequencies (~70-150kHz) commonly associated with mains AC power
converters, and those involved in WPT (wireless power transfer), it is preferred to use
topologies such as the LLC topology and the phase-shifted full bridge (PSFB), because these
enable (but not necessarily guarantee), zero-voltage switching (ZVS), and/or zero-current
switching (ZCS). Hence the term “lossless switching” or “soft switching.” Further: if we are doing
soft switching anyway, why would we ever need ultra-fast switching devices?
Note: A lot goes into being able to extract or guarantee soft switching, but fundamentally, for
ZVS for example, we make use of the fact that any actual inductance, or a switched network
which appears “inductive” (i.e. we should be switching to the right side of its resonant peak),
tends to force current to freewheel through the body diode of the switch (or through an external
paralleled diode, preferably). So if we turn that switch ON after a short “deadtime” (typically
100-200ns), we can hopefully ensure that diode conduction has already occurred, in effect,
bringing the voltage across the switch to almost zero when we actually turn it ON, so there is
no V-I crossover term: the “V” has already crossed over, before “I” even starts to move. No “V-I
overlap”, in effect. And that ensures ZVS, or lossless switching.
D) Bidirectionality. A modern emerging trend is to charge batteries from an available energy
source, such as solar power panels (“PV”, i.e. photo-voltaic cells), but under certain conditions,
say during the night, to draw out some of the energy residing in the battery and deliver it back
to the “grid” for example. Hence the search for bidirectional converters and the recent
emergence of the DAB converter. Of course, a reversal in the direction of power flow occurs in a synchronous Buck too, with a “pre-biased output”, and that can transform the forward-Buck
into a reverse-direction-Boost, as was discussed in Chapter 9 of A-Z Second Edition. But to do
something like that with isolation included, points to the DAB converter.
In fact, both the LLC and the DAB can combine all the above “requirements”: Setting any output
voltage (irrespective of turns ratio), isolation, higher efficiency and bidirectionality. A
bidirectional LLC will pose some other “unanticipated” issues, especially based on the rather
misguided “double tuning” approach used in some WPT implementations today, but it does
hold future promise if done right. It seems a longer way off though. So, for now, only the DAB is
considered within the scope of this chapter.
The DAB converter appears inherently symmetrical, and is thus attractive as a likely/promising candidate for bidirectionality. And it is unique! Far too distinctive to be called a mere derivative, or combination/composite, of the three “fundamental” topologies: the Buck, the Boost and the Buck-Boost. It was probably inspired initially by the PSFB when used with a full-bridge
synchronous rectification stage. Perhaps someone out there observed that there were now
eight switches, in effect two full-bridges, positioned on either side of a transformer, so it must
be possible, in principle, to exchange roles: the Secondary could become the Primary, and send
current in a reverse direction. In fact, that is possible with the DAB, but is also just the easiest
part of its design!
The most difficult part of actually designing a DAB is not the basic control needed to effect a change in the direction of power flow, but rather, even in a given direction, how to design it optimally. So, the first question for the DAB is: do we even understand it as a “topology”?
But the question at ground zero is: do we even understand why it is stable? Can it ever be considered a topology?
As discussed in Chapter 1 of A-Z Second Edition, the essence of any topology is that it is stable!
And without a control loop too! So, under steady conditions, i.e. a steady input and a steady
load, we must automatically arrive at a certain “stable condition”: one which ensures inductor
“reset”. As indicated, this must happen irrespective of any clever control-loop strategy, the
intended purpose of which should only be to add precision to the desired output, not to make
viable any inherently unviable topology!
Unfortunately, as regards the “stable condition”, there is some confusion in related literature as to what even its precondition, “inductor reset”, means. Reset does not mean that the inductor current returns to zero at the start/end of every switching cycle. That does happen in
discontinuous conduction mode (DCM), quite automatically, in the three fundamental
topologies. But in general, it only means that the start and end inductor current values be
precisely the same: positive, negative, or zero, as the case may be. This is applicable to both CCM
and DCM, or to synchronous converters (with negative currents possible). And thus, each cycle
can become a simple repeat of the previous cycle. Otherwise there would be constant
current/flux “staircasing”, which is potentially destructive, eventually leading to
inductor/transformer saturation and destruction of switches. To avoid that situation is a major objective, even in the areas of the PSFB and DAB, but one that is surprisingly insufficiently addressed in literature.
In hindsight, the reason why the Buck, Boost and Buck-Boost emerged as fundamental
topologies, is what happens at initial power-on, without any user intervention. See Figure 1.11
of A-Z Second Edition for a Buck-Boost as an example. Its output voltage increases
automatically, progressively increasing the magnitude of the down-slope of the inductor
current, which is dependent on VO, to force inductor reset ultimately. Very naturally! Basically,
after a certain number of cycles, depending mainly on the value of the output capacitance and
any load connected there, the down-slope increases to exactly the point where it matches the
up-slope (in terms of magnitudes). The increment and subsequent decrement of current in each
cycle are numerically identical, i.e. |ΔION| = |ΔIOFF|. And that results in a “stable condition”
automatically: i.e. one corresponding to inductor reset (every cycle). It is a repetitive state.
Hence it is a topology. Now, to precisely control the output, we can add active duty cycle control
via a control loop.
As shown in Figure 1.9 of A-Z Second Edition, this also corresponds to the statement that the
net VoltSeconds in any cycle is zero—a result of the simple inductor current law V=L ΔI/Δt.
Because we realize that to arrive at the same “|ΔI|” during ON and OFF durations, we demand
that numerically, VON × tON = VOFF × tOFF. And the reason that this equality can occur in the first
place, is that VOFF increases steadily from its initial value of zero, on account of the increasing
VO, finally settling down to the steady, holding value of VO corresponding to a “stable condition”.
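This natural settling is easy to demonstrate numerically. Below is a crude cycle-by-cycle sketch of an open-loop Buck-Boost (illustrative values, no control loop), showing the output rising until VON × tON = VOFF × tOFF, i.e. |ΔION| = |ΔIOFF|, all by itself:

```python
# Open-loop Buck-Boost settling to its "stable condition" (a crude
# per-cycle averaged model; all values are illustrative).
Vin, D, T = 12.0, 0.4, 10e-6      # input, duty cycle, switching period
L, C, R = 50e-6, 100e-6, 10.0     # inductor, output cap, load
vo, iL = 0.0, 0.0                 # power-on: everything starts at zero
for _ in range(20000):
    d_on = Vin * D * T / L        # up-slope increment during the ON time
    d_off = vo * (1 - D) * T / L  # down-slope decrement, grows with VO
    iL = max(iL + d_on - d_off, 0.0)   # diode blocks negative current
    # the output capacitor is charged only during the OFF segment
    vo += (iL * (1 - D) - vo / R) * T / C
print(round(vo, 2))               # settles near Vin*D/(1-D) = 8 V
print(abs(d_on - d_off) < 1e-3)   # |dI_on| = |dI_off|: inductor reset
```

No control loop is present, yet the down-slope grows with VO until it exactly matches the up-slope: the "stable condition" arrives on its own.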
So that is the first thing we need to be assured of, in dealing with any topology, especially the
DAB. Because things get very complicated here. Unlike simpler topologies, the DAB has not just
two obvious segments of applied VoltSeconds (e.g. ON and OFF times), but even in the simplest
case that we will discuss here, eight segments, none of which can definitely even be identified
as distinct “power delivery” or “freewheeling/reset” durations.
No simple ON and OFF durations! We can’t even usually say which of these eight segments is
definitely one with an up-slope (of inductor current) and which is a down-slope, leave aside
assuring ourselves that the net increments and decrements of the eight current segments are
always going to add up (signs included) to zero, as demanded in any “stable” condition. And if
that is even assured, or guaranteed, can we at least answer: for what exact output voltage does
that happen? In other words, what is the “simple” DC transfer function of the DAB? If any?
Remember that for the three fundamental topologies, a certain relationship between input and
output voltages always needed to occur, for “reset” to result. So, what if any, is that output voltage,
for a DAB?
A “simple” DC transfer function expression is actually, for good reason, completely missing in
related literature. There is none! Yet this surprising fact seems to have gone fairly unquestioned
or unnoticed. Instead, the DAB seems to have spawned an overwhelming number of
dissertations/theses, which on closer examination were “validated” purely by simulations and
graphs under very specific conditions, rarely any built-to-design converter. All research papers
seem to be focusing on how to control the output, which as we indicated, is helpful when we
are already sure we have a “topology” on hand at least. But now we have exotic DAB control
methods: SPS (single phase shift), DPS (dual phase shift) and even TPS (triple phase shift). All intended to be efforts to tame the powerful DAB, quite obviously. But the DAB, with
so many independent variables, phase combinations, and load combinations to choose from,
seems also to be always stable. Inherently so. For any output setting, any phase combination.
Its inherent stability was however almost assumed as a fait accompli. But in reality, nature
made that come true, through its inherent self-stabilizing tendency, as we had pointed out in
Chapter 1 of A-Z Second Edition too.
Fast forward: we can in fact design the DAB for any output voltage, irrespective of the turns-ratio
effect from the transformer. Fundamentally, it offers Buck-Boost action, but one not so obvious,
or predictable, based on any simple DC transfer function. Hence arises another question: what
really determines the output voltage, if reset is assured irrespective of the output anyway?
We realize that this is not a resonant system where we could carefully tailor the resonant bump
to provide us a step-up action, and use turns ratio to create a step-down action if desired. We
are in effect saying that a DAB can, without taking recourse to resonance, or even turns ratio,
produce both step-up and step-down action on demand, and still be assured that inductor reset
will always occur! Unconditionally. A truly powerful topology if so, necessitating a complete
relearn of classical power conversion on occasion. At least a serious re-clarification of basic
concepts.
So back to some more basics: is there at least a certain, obvious, simple direction of correction
(“DoC”) to the DAB, such as “increase say, x or y or phase difference, to increase the output
voltage”? In all three fundamental topologies we had a simple rule: increase the duty cycle and
thereby increase the output. But in fact, no simple DoC exists for the DAB. The DAB is far more
complicated. It “works”, but even to get our heads around it, requires an astonishing amount of
pre-study, which is what we are embarking on here.
The “underlying topology” of the DAB, is a buck or a boost action, but not the traditional Buck-
Boost topology. It merely steps down or up, that is all we are saying here. As mentioned, we can
use transformer turns-ratio on top of that underlying action. But unlike the Flyback, the DAB is also bidirectional, utilizing very small magnetics: more like an LLC, or a Forward converter without the latter’s output choke, but possibly with a much smaller, additional discrete inductor, like the one we prefer in the PSFB too. And it can all be very efficient too, like a PSFB or an LLC. But
unlike an LLC, it can also be fixed-frequency, for obvious benefits in meeting regulatory limits
for electromagnetic emissions.
Our ultimate mission: we will create a Mathcad spreadsheet to match simulations using
SIMPLIS. We thus create a “kernel”, and then, using our powerful scaling laws, we will scale
that to any power and frequency level, not to mention any input voltage level too. In just a few
lines.
From Figure 2.1, we see that we have a Primary side, with four FETs, labelled Q1, Q2
(corresponding to switching node “a”, or “Vsw_a”), and Q3, Q4 (switching node “b” or “Vsw_b”).
Note that all odd-numbers, Q1 and Q3 here, are high-side FETs of their respective half-bridges.
Similarly, on the Secondary side we have four FETs, labelled Q5, Q6 (switching node “c” or
“Vsw_c”) and Q7, Q8, (switching node “d” or “Vsw_d”), with Q5 and Q7 being the respective
high-side FETs of the two Secondary-side half-bridges. Note the dots on the transformer
windings, so in effect, the half-bridge consisting of Q5 and Q6 is the “cousin” or “next-door
neighbor” of the half-bridge consisting of Q1 and Q2. The phase lag between the Primary and
Secondary sides, is based on the lag between Q1 (master) and Q5 (its cousin).
So, the high-side FET of the half-bridge consisting of Q1 and Q2, i.e. Q1, is the designated
“master” always, and in fact all the other half-bridges are set up to lag this, as shown in Figure
2.2. The lag between the two Primary-side half-bridges (measured between the high-side FETs
Q1 and Q3) is D1 × π (in radians), and since D1 can be a maximum of 1, the maximum shift is π radians or 180°. D1 thus corresponds to what we may refer to alternately as “Angle 1”, or the “intra-bridge” phase difference, or the “Primary lag”.
D1 = 1 (i.e. 180°) corresponds to maximum power capability from the Primary Full-bridge,
since whenever Q1 turns ON, Q3 is OFF, which means Q4 is ON whenever Q1 is ON, and full
power is possible (not necessarily delivered). Similarly, when Q1 turns OFF, i.e. Q2 turns ON, it
finds Q3 ON for the entire duration too, and now current flows through the full-bridge network
in a reverse direction, doing its bit to try and ensure inductor/transformer reset. Whether that
happens or not is an entirely different matter, as we will explain.
Similarly, the lag between Q1 and Q5 is D2 × π, in radians. D2 corresponds to what we may refer to alternately as “Angle 2”, or the “inter-bridge” phase difference, or the “Secondary lag”. Like D1, D2
too can be a maximum of 1, so the Secondary essentially lags the Primary by a maximum of
180°. Indeed, it can be shown that under these conditions, power flows from Primary to
Secondary. But if D2 is “greater than 1” (i.e. angle greater than 180°), or alternatively expressed,
“is negative” (-180° to 0, which is actually the same thing as 180° to 360°), in effect we can think
of Q5 as leading (not lagging) Q1. The power direction will then simply reverse. But if D2 is
between 0 to 1, the direction is always Primary to Secondary.
Figure 2.1: The Dual Active Bridge under study
To keep the number of variables limited, we are implicitly also assuming that the lag between
Q1 and Q3, i.e. D1, is the same as the lag between Q5 and Q7. That means both full-bridges have
the same inherent phase-shift. So, the “intra-bridge phases” are the same. But, measuring the
lag of Q7 with respect to the master, Q1, we get (D1 + D2) × π.
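These phase conventions can be summarized in a few lines. The sketch below (Python; the switching frequency is just an illustrative choice) converts D1 and D2 into the turn-on delays of Q3, Q5 and Q7 relative to the master, Q1:

```python
import math

def lag_times(D1, D2, f_sw):
    """Turn-on delays of the lagging high-side FETs relative to the
    master Q1, per the phase conventions in the text: Q3 lags by D1*pi,
    Q5 by D2*pi, and Q7 by (D1 + D2)*pi (radians)."""
    T = 1.0 / f_sw
    rad_to_t = T / (2 * math.pi)   # 2*pi radians = one full period
    return {"Q3": D1 * math.pi * rad_to_t,
            "Q5": D2 * math.pi * rad_to_t,
            "Q7": (D1 + D2) * math.pi * rad_to_t}

lags = lag_times(D1=1.0, D2=0.5, f_sw=100e3)
print(lags)  # Q3 lags by a half period (5 us), Q5 by 2.5 us, Q7 by 7.5 us
```

At 100 kHz (T = 10 µs), D1 = 1 places Q3 a half period behind Q1, and a 90° Secondary lag (D2 = 0.5) places Q5 a quarter period behind.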
We mentioned that when the Q1 and Q3 are offset by π (i.e. 180°), the Full-bridge is “capable”
of delivering maximum power. But to actually get maximum power into the load, it can be
shown that the Secondary has to have a precise “Secondary lag” of 90°. If the Secondary lag is
less or greater than 90°, that will reduce the power further. See Figure 2.3. It can also be shown
that if, for example, D1 is such that the lag on the Primary side is less than 90° (D1 < 0.5), say 45°,
the curves start flattening out. Now the Secondary side delivers maximum power when its lag
equals the Primary lag (D1 = D2). So, if the Primary lag is 45° (D1 = 0.25), maximum power is
obtained only when the Secondary lag is also 45° (D2 = 0.25). If the Secondary lag is made to
vary from 45° to 135°, the power curve remains flat, at the value it had at 90°! In fact, if the
Primary lag is reduced to say, 22.5° (D1 = 0.125), maximum power is obtained only when
Secondary lag is also at 22.5° (i.e. D2 = 0.125). The power curve would now be flat over the entire range of Secondary lag from 22.5° to 157.5°. The curves for 22.5° are not presented in Figure 2.3, since they are really of no use. But it is clear that all power curves are always symmetrical around Angle 2 = 90° (D2 = 0.5), and maximum power occurs exactly where Angle 1 = Angle 2
(i.e. D1 = D2).
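For the special case D1 = 1, these observations agree with the widely quoted single-phase-shift (SPS) power expression, P = n × V1 × V2 × φ(π − |φ|)/(2π² × fS × L), where φ is the Secondary lag in radians and L is the series (shim/leakage) inductance. This formula is not derived in the text, and the numbers below are purely illustrative, but a quick check confirms the peak at 90° and the symmetry about it:

```python
import math

def p_sps(phi, V1, V2, n, fs, L):
    """Widely quoted single-phase-shift (D1 = 1) DAB power expression.
    phi: Secondary lag in radians; L: series (shim/leakage) inductance.
    Values used below are illustrative, not a design from the text."""
    return n * V1 * V2 * phi * (math.pi - abs(phi)) / (2 * math.pi**2 * fs * L)

V1, V2, n, fs, L = 400.0, 400.0, 1.0, 100e3, 60e-6
p90 = p_sps(math.pi / 2, V1, V2, n, fs, L)       # Secondary lag = 90 deg
p45 = p_sps(math.pi / 4, V1, V2, n, fs, L)
p135 = p_sps(3 * math.pi / 4, V1, V2, n, fs, L)
print(round(p90), round(p45), round(p135))
# P(45 deg) equals P(135 deg): symmetry about 90 deg, peak at 90 deg
```

The 45° and 135° points land at the same power, below the 90° peak, exactly as the curves in Figure 2.3 indicate for D1 = 1.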
What else does this imply? The answer is: there is no point in trying to set D2 greater than D1
ever. The power capability is specified by D1, but it can be controlled by D2. We get nothing by
making D2 exceed D1. If the Primary is limited by a low value of D1, there is nothing we can do
on the Secondary side to somehow “extract” more power. Keeping this in mind, our entire
assumption from now on is: D1 > D2.
In Figure 2.2, we also reveal that there are two sub-cases within the broader category of D1 >
D2. Case 1: D1 + D2 > 1, is accompanied by a “small edge” appearing in the Vsw_d waveform
as shown. The other is Case 2: D1 + D2 < 1. The two cases produce different applied voltage
waveforms and different time segments too, as we will see. The two cases need to be studied
separately, though if the math is right, both cases must converge: give identical results for the
case D1 + D2 = 1! That serves as a good cross-check point.
In the DAB schematic of Figure 2.1, we have shown an “ideal/DC-DC transformer”, available in
most simulator packages. It has a theoretical inductance of infinity, by definition. Which,
indirectly, seems to imply that, using the inductor equation V=L ΔI/Δt, “no ΔI” can be injected
into, say, the Primary winding, of this ideal transformer, and thus no current, “I”, is possible
either (because that would need a non-zero ΔI even to start things off!). Indeed, there is no
current, at least not in the (infinite) “L” of the Primary winding of an ideal transformer. At least
when there is no load present. But wait a minute: isn’t there an actual measurable current
flowing into the winding of any real transformer? With or without load present? Yes, there is
an underlying “excitation/magnetization current” that every real transformer needs, just to
start things off. More on this shortly, but to complete our transformer “model”, we definitely
need to connect a “magnetizing inductance” “Lmag” across it, as shown in Figure 2.1,
representing the actual measured “Primary inductance” of the actual/real transformer.
The reason for separating the constituents out is quite real: the magnetization current, though
an enabler, does not itself participate in the “transformer action”, which occurs (only) in the
“ideal transformer” shown below Lmag.
That “no-load” current we measure, is the one that flows through Lmag: it is always flowing,
irrespective of load. So, how do things change when there is a load present? Does that mean
that we somehow manage to push current into the “infinite inductance” of the Primary winding
of the ideal transformer? In a sense! But to get a good mental picture of how to understand all
this, and thereby understand “transformer action” too, we look at Figure 2.4 now.
Figure 2.4: Understanding the transformer action and simplification for DAB
We start with schematic “A” where we first have an arbitrary Full-bridge driving a real
transformer. So, we are realizing that the current flowing into the Primary winding of this real
transformer, actually has two components in general, not so obvious from this schematic. One
is a load-independent current, corresponding to the current flowing in Lmag, shown in “B”, as
in Figure 2.1. This corresponds to the magnetization (“circulating current”) component in the
Primary-side, not associated directly with power delivery. The other “invisible” current
component is the one that is actually a participant in the “transformer action”. It is the load-
dependent portion. In effect, it flows, not through the “infinite inductance” of the winding, but
through the “load” which appears across the Secondary winding of our ideal transformer. If the
load is not present (very high load resistor for example), the load-dependent current
component falls to zero. At which point the residual current still present in the Primary
winding, is the circulating “magnetizing” current component, and that continues.
So, what really happens is that the infinite inductance of the ideal transformer, goes out of the
picture, because the flux created by the Secondary-side (scaled) current, a result of Faraday’s law
of induction, exactly cancels all the flux created by the load-dependent Primary-side current
component. All this occurring within this ideal transformer model. Distinct to what happens in
Lmag, even though physically, both are part of the same (actual) real transformer!
Note that no Secondary-side current can ever cancel the flux in the core arising due to Imag,
though that is also flowing within the same real transformer. Imag is just an enabler, not a
participant.
So, in effect the infinite inductance of the ideal transformer goes out of the picture, and this is
best visualized in “C” within Figure 2.4. Here we have removed the ideal transformer entirely,
and replaced it with the “reflected” load, or the “equivalent load” as seen by the Primary side.
After this process of reflection, we can totally forget about the Primary-side (infinite)
inductance of our ideal transformer. The entire transformer, with its inherent property of
voltage and current scaling, is gone entirely now, being replaced by appropriately reflected
impedances placed in parallel to the (still-remaining) magnetizing inductance.
All that takes us finally to “D” in Figure 2.4, where we have removed the infinite inductance,
and so it is clear that all the current, except for Imag, heads straight to the load. And as indicated,
we may also choose to (but perhaps shouldn’t always) ignore the steady circulating current
component associated with the actual, Primary-side inductance, Lmag, of the real transformer
we started off with. That current component has ramifications as we will discuss, besides the
obvious losses associated with this “circulating” current component.
Note: In reflecting impedances from the Secondary side to the Primary side, we should always remember that, based on energy invariance in the process of reflection, and since Is → Is/n and Vs → Vs × n, we get Cs → Cs/n², Ls → Ls × n², and Rs → Rs × n², where n = NP/NS. Simply because ½CV², ½LI² and I²R must all be preserved during reflection, despite the current
and voltage scaling that occurs. And that is how we can scale anything from the Secondary side
to its Primary side equivalent, with no intervening (ideal) transformer necessary anymore. The
magnetizing inductance cannot be wished away.
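The reflection rules in the Note reduce to a few lines of code. A minimal sketch (the example values are illustrative):

```python
# Reflecting Secondary-side elements to the Primary side (n = Np/Ns),
# by energy/power invariance as described in the Note above.
def reflect_to_primary(n, Cs=None, Ls=None, Rs=None):
    out = {}
    if Cs is not None:
        out["C"] = Cs / n**2   # preserves (1/2) C V^2
    if Ls is not None:
        out["L"] = Ls * n**2   # preserves (1/2) L I^2
    if Rs is not None:
        out["R"] = Rs * n**2   # preserves I^2 R
    return out

# e.g. n = 4 (step-down transformer), 10-ohm load on the Secondary:
print(reflect_to_primary(4, Rs=10.0))   # reflected load: 160 ohms
```

With the transformer thus removed, the reflected elements simply sit in parallel with the (still-remaining) magnetizing inductance.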
So, we realize that from a high level, “transformer action” is a basic process via which: voltage
scales as per turns ratio (Vp/Vs = n), and current, too, scales similarly, but in an opposite
manner (Is/Ip = n). That “opposite” behavior incidentally, keeps the product “VI” unchanged on
both sides of the transformer, as we should expect, since “VI” is power (Watts), and that too
cannot just appear or disappear, despite any tricks or reflection artifices of ours! Or our models
are inaccurate and do not reflect nature.
The key difference between voltage and current scaling is clearly a bit more subtle. The voltage
present across the ideal transformer Primary winding, and across Lmag is the same, so there
was never any confusion as to what exact Primary-side voltage scales as per turns ratio. But
most people get easily confused when it comes to understanding how currents scale. Basically,
the entire current flowing into the Primary winding of a real transformer (“ITOTAL”) does NOT
scale as per turns ratio! Only the load-dependent portion of that does. Basically, we have to
remove Imag from ITOTAL, and that is the current component which scales via transformer
action. And finally, it is that scaled version which heads to the load and delivers output power.
Whereas, the current associated with the magnetizing inductance, “Imag”, simply goes back and
forth every cycle and represents the circulating current component!
It is not associated (directly) with any power delivered to the load, but is the enabler. Indeed,
all the energy associated with Imag is theoretically fully recoverable. In practice, no recovery
process is perfect, and there are associated losses.
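The current bookkeeping described above can be stated in one line: only ITOTAL − Imag scales by turns ratio. A trivial numeric illustration (the values are hypothetical):

```python
# Only the load-dependent part of the Primary current takes part in
# "transformer action" and scales to the Secondary; the magnetizing
# component (flowing in Lmag) does not. Illustrative numbers only.
def secondary_current(i_total, i_mag, n):
    """i_total: total measured Primary winding current (A);
    i_mag: magnetizing component (A); n = Np/Ns."""
    return (i_total - i_mag) * n

# 2 A total into the Primary, of which 0.3 A is magnetizing, n = 5:
print(secondary_current(2.0, 0.3, 5))   # 8.5 A on the Secondary side
```

Naively scaling the full 2 A would predict 10 A on the Secondary, which is exactly the common confusion the text warns about.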
Comparing to classical, transformer-based topologies, refer to Figure 3.6 of A-Z Second Edition,
showing a Forward converter. Note that the “magnetization current” component is just the
shaded triangle riding on top of the reflected trapezoidal current waveform being reflected
from the effective Buck stage consisting of the output choke, present on the Secondary-side.
And that triangle is not related to the Secondary side current waveform by any scaling law. Its
associated energy is recovered every cycle, by the energy-recovery (“tertiary”) winding of the
Forward converter. Since we usually set the tertiary winding to have the same number of turns
as the Primary winding, but connected in such a way that it conducts only during the OFF-time,
in effect we get the same voltage appearing across the tertiary winding during the OFF-time, as
the Primary winding had across it during the ON-time. So from the basic inductor equation V=L
ΔI/Δt, we realize that the slope of the magnetization current during the ON-time is equal but
opposite in sign to the slope of the recovery current during the OFF-time (albeit in different
windings, but on the same core). So, to ensure “reset” we just have to ensure that we leave
slightly more time for the magnetization current component to return to zero, than the time we
gave it to ramp up in. Indeed, the ON and OFF time magnetization currents are in separate
windings, but still on the same core, and thus can legitimately juggle the associated energy. And
that is the reason why a standard Forward converter is always operated at a duty cycle slightly
less than 50%, say 48%. That simple time-division approach ensures that the recovery current
reaches zero, literally just in time, and stays there a wee bit longer, till the next cycle starts
again. Inductor reset occurs every cycle. Note that the load-dependent Primary-side current
component adds zero flux/energy to the core, because as we mentioned, the Secondary-side
current produces flux which fully cancels the flux produced by that load-dependent Primary-
side current component. That is why a Forward converter transformer core is so small, and
usually left un-gapped too. The flux in it is always fixed. It does not depend on load current. So,
if the size of the core is increased visibly for “higher power”, that is only to create extra window
area to accommodate the thicker copper which comes naturally with higher load currents.
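The reset argument for the Forward converter is simple time-division arithmetic, and can be sketched as follows (an equal-turns tertiary winding is assumed, as in the text):

```python
# Forward-converter reset check: with an equal-turns tertiary winding,
# the magnetization current ramps down during the OFF-time with the
# same magnitude of slope it ramped up with, so reset requires that
# (1 - D)*T >= D*T, i.e. D <= 0.5.
def forward_reset_margin(D, T):
    t_up = D * T              # ON-time: magnetization current ramps up
    t_down_needed = t_up      # equal turns -> equal and opposite slope
    t_down_avail = (1 - D) * T
    return t_down_avail - t_down_needed   # positive => reset with margin

T = 10e-6                                  # illustrative 100 kHz period
print(forward_reset_margin(0.48, T) > 0)   # D = 48%: resets with margin
print(forward_reset_margin(0.51, T) > 0)   # D = 51%: flux staircasing
```

At D = 48% there is a small but positive margin every cycle; at 51% the margin is negative, and flux staircases toward saturation.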
In contrast, a Flyback “transformer” doesn’t have “transformer action” per se. By energy
conservation principles, we do get voltage and current scaling on either side too, but since the
Secondary winding does not conduct when the Primary winding is conducting, there is no “flux
cancellation” effect either, which is the very basis of “transformer action”. We can’t even talk in
terms of a separate magnetization component and a separate load-dependent current
component here. It is all just one current component. A Flyback transformer is best looked at
as a multi-winding inductor, not as a transformer, despite exhibiting pseudo “transformer
action.” Another way to look at it is: it is all just magnetization current, and the Secondary
winding is simply an “energy recovery” winding for that stored energy, except that unlike a
Forward converter in which the “recovered” energy is delivered back to its input
supply/capacitor, in a Flyback, the magnetization energy is dumped into an entirely different
capacitor, the output capacitor in this case!
Why did we get into so much detail to recollect transformer action here? To point out that
specifically, only inductors store energy, not “ideal transformers”. Even the inductors which are
inherently part of a real transformer.
And for all the inductors present, but only the inductors, not the ideal transformers, we need to ensure that “reset” occurs every cycle, for it to be a viable topology.
In an ideal transformer, complete flux cancellation occurs at any load. So, there is no associated
stored energy, ever. Nothing to “reset”. It is pure transformer action. Any stored energy resides
only in the associated magnetizing inductance. Though indeed, physically, that is part of the
same physical (real) “transformer” that we are modeling. Which we now realize, is part
inductor, part transformer! And we have to be very careful that we reset the magnetizing
inductance part of it.
The same is typically true for the leakage inductance associated with a real transformer too.
Though leakage is usually the result of flux straying through air, so it is in effect an air-gapped
core structure, and we do not have to worry about saturating that leakage. On the other hand,
in the PSFB and DAB, we usually prefer to use a separate, more predictable, “shim inductor”,
instead of the leakage from within the transformer, as shown in Figure 2.1. In that case, we
have to be sure we reset the external/“leakage” inductor too.
In general, if we fail to ensure all involved inductors of a power converter reset every cycle, we
will arrive at the phenomenon of “flux-staircasing”, leading eventually to core saturation and
destruction of switches.
In a DAB, the demand for inductor reset, both for Lmag and Llkg, seems difficult to ensure, or
at least confirm to ourselves. First, because the way these topologies are constructed, with FETs
in parallel to diodes, we really don’t have the ability of using an “energy recovery” winding. But
luckily, the two half-bridges of a Full-bridge alternate in action, and that can cause the
magnetization current to ramp up, and then ramp down too, to achieve reset (though not in two
distinct ON and OFF periods).
In any transformer-based converter, we always end up applying certain reflected voltages from
the Secondary side to the Primary side. In fact, in a DAB, we will show that there are eight
distinct segments per cycle, not just two (the ON and OFF times in a standard Forward
converter). Those voltages are typically flat-topped, since the Secondary side gets clamped to
VO or VIN, or some combination of the two. In turn these flat-topped voltage segments, “allow”
certain rising or falling current slopes. These obey ΔI = (V × Δt)/L, but for different V’s, as
applicable. Yes, as the Full-bridge alternates between its two half-bridges, we can get various
increments followed by decrements of current. To prevent “staircasing” over several switching
cycles, we want to ensure that all these delta I’s, with signs included, add up to zero every cycle.
Just as automatically occurs in standard inductor-based topologies. As mentioned, the
magnetization current does not need to “return to zero”, as in a Forward converter. We can
have positive or negative currents in this topology. The challenge therefore is: with all these
variabilities, we still need to assure ourselves that the magnetization current component
returns to the exact value it started the cycle off with (positive, negative or zero, as the case may
be), every time, every cycle, for any phase combination or load combination. Even perhaps for
any output voltage, as in a DAB. That would finally ensure “reset”, and no “staircasing” and we
would truly have a topology on hand.
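The reset condition itself is easy to state, even with eight segments: with flat-topped voltage segments, ΔI = (V × Δt)/L per segment, so reset holds exactly when the signed VoltSeconds over one cycle sum to zero. A generic sketch (the segment values below are illustrative, not an actual DAB operating point):

```python
# Generic per-cycle reset check for a multi-segment current waveform:
# each flat-topped voltage segment contributes dI = V*dt/L, and "reset"
# holds iff the signed volt-seconds over the full cycle sum to zero.
def resets(segments, L):
    """segments: list of (voltage, duration) pairs for one full cycle."""
    volt_seconds = sum(v * dt for v, dt in segments)
    delta_i = volt_seconds / L          # net current change per cycle
    return abs(delta_i) < 1e-9

# Four flat-topped segments whose volt-seconds cancel (balanced):
balanced = [(400, 2e-6), (-100, 4e-6), (-400, 2e-6), (100, 4e-6)]
print(resets(balanced, L=60e-6))        # True: no flux staircasing
# A slight timing mismatch breaks the balance:
skewed = [(400, 2.05e-6), (-100, 4e-6), (-400, 2e-6), (100, 4e-6)]
print(resets(skewed, L=60e-6))          # False: current staircases
```

Note that the check says nothing about the current returning to zero, only to its starting value, exactly as the text demands.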
There is another huge hint/lesson for us, from voltage-mode controlled full-bridge topologies,
where if we were just depending on the associated time intervals of the rising and falling
currents to be equal, in order to indirectly equalize say, |ΔImagON| and |ΔImagOFF|, and thus
return the current at the end of every cycle to its starting value, we could be in big trouble.
Slight inequalities, especially in timing, could cause flux-staircasing and eventual destruction of
the switches. Same situation as a Forward converter inadvertently operated at, say, 51% duty
cycle! You never heard about it, because it blew up right away.
For these reasons, it usually becomes imperative in Full-bridges, including the PSFB and DAB,
to incorporate a large-value DC blocking capacitor, “C_DCBLOCK” as shown in Figure 2.1.
Especially with voltage mode control. The purpose of such a large-capacitance value would be
to do nothing to the high-frequency components of the switching current. Or even to the overall
theoretical analysis. In practice, it would serve to merely absorb any slight inequalities, say
those arising from slight differences in timings of the right and left side half-bridges. The DC
blocking capacitor would take upon itself a slight DC voltage offset, thereby automatically re-
adjusting the applied volts, to correct any inherent VoltSecond imbalance.
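The self-correcting action of C_DCBLOCK can be illustrated with a crude cycle-averaged model (all values hypothetical): a timing skew between half-bridges applies a small DC voltage to the loop, and the series capacitor charges until it absorbs exactly that offset, returning the DC loop current to zero.

```python
# How the DC blocking capacitor self-corrects a volt-second imbalance
# (a crude cycle-averaged sketch; all values are illustrative).
Vin, t1, t2 = 400.0, 5.05e-6, 4.95e-6   # slightly skewed half-cycles
T = t1 + t2                             # switching period (10 us here)
L, C, r = 60e-6, 100e-6, 0.5            # loop L, C_DCBLOCK, loop loss
v_dc = Vin * (t1 - t2) / T              # DC component of applied voltage
v_cap, i_dc = 0.0, 0.0
for _ in range(5000):
    i_dc += (v_dc - v_cap - r * i_dc) * T / L   # net DC drive on the loop
    v_cap += i_dc * T / C                       # capacitor soaks up the offset
print(round(v_cap, 3), round(i_dc, 6))
# v_cap settles at v_dc (4 V here) and the DC loop current returns to zero
```

The capacitor takes on just the DC offset needed to restore VoltSecond balance, which is precisely the "safety net" role described above.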
Of course, we could similarly add a DC blocking capacitor to the Secondary side too, in series
with the Secondary winding. It would maintain symmetry between the two Full-bridges, which
would help in preventing staircasing when the power direction reverses, and the Secondary
side becomes the new Primary side. And neither DC blocking capacitor would do anything at all
to the overall expected performance, unless there was some inherent mismatch between any
two half-bridges of any Full-bridge. In which case, it would delicately step in, by just the right
amount, to ensure the desired/anticipated performance once again. So, it is more like a safety
net. But a necessary one!
The underlying theory and explanation of the DAB and the PSFB, however, does not involve, or need to involve, the DC blocking capacitor. We will therefore ignore it in our analysis too, as in the simplified DAB schematic in Figure 2.5. But with a strong underlying recommendation to
not forget the DC blocking capacitor in an actual build.
Now, we need to explain the reason for the presence of the critical component “Llkg”, or the
leakage inductance, in Figure 2.1. Yes, we will need to ensure that it resets firmly and
unconditionally too, just as Lmag must. But with a DC blocking capacitor present, we would be
receiving great assistance in the matter. Though still on our “to do” list: we have to prove that
the DAB is “inherently stable”, and thus, the DC blocking capacitor actually stands a chance of
correcting any unintended asymmetries between the two halves of the Full-bridges. Because even
the DC blocking capacitor can’t make any inherently unviable topology, viable. Just as no control
loop can either.
In Figure 2.5, by eliminating the transformer, we have the reflected output rail “VOR” (= VO × n).
The load across VOR is actually the reflected load, equal to RLOAD × n², where n = NP/NS.
The first thing we see from the simplified schematic in Figure 2.5, is that there would be
literally nothing standing between the input source and the output capacitors, to regulate the
voltage, or smooth it out, were it not for “Llkg”. We would get a direct input-to-output “short”,
accompanied by spikes of very high current and eventual destruction. That too is not a viable
“switching topology”! That is why in a Forward converter, we always need an output choke!
Same as the PSFB in Figure 2.6.
Figure 2.5: The DAB schematic simplified
Some early hobbyists in high-frequency switching power instinctively tried to use the simple
“50-Hz” iron-transformer “principle”: i.e. just apply turns ratio effect for attaining the desired
voltage. Maybe add this new “duty cycle control” to tailor it precisely. What they didn’t realize
is it was now misapplied to an inadvertently well-coupled (low-leakage) high-frequency ferrite
transformer, and with no input/output smoothing choke present to save the day! They all
learned the hard way that what they had come up with was just a direct short from input to
output, with no intervening parasitics like the leakage found naturally in the early iron-core 50-
Hz transformers.
In the DAB too, we thus have good reason to introduce Llkg. For one, to prevent the “direct
input-output short” effect. But hopefully also to produce smoothing and voltage regulation (full
output control). In a PSFB, we use an output choke, as in a Forward converter, and also Llkg. The
purpose of Llkg there is, however, only to create soft switching, or ZVS, by coaxing current to
freewheel momentarily through the body diode of a FET during the deadtime, as discussed
earlier. In a DAB, by using a relatively higher inductance for Llkg, we hope to acquire control of
the output and also introduce soft switching.
Figure 2.6: The Phase-shifted Full-bridge
So, the Llkg in a DAB is basically one component serving two functions. But the only way we can get
it to work as we hope, is by no longer just switching the Full-bridge on the Secondary side
synchronously (i.e. to aid diode conduction), as we do in the PSFB, but to actively switch the
FETs, with phase control perhaps. And reverse the phases to reverse direction of power flow
too. That is how the DAB really got off the ground.
In Figure 2.7, we have for convenience, placed the entire Primary side to the left, and the entire
Secondary side to the right. Note that the input DC voltage source feeds both the Primary-side
half-bridges, though the actual wired connection may be implied, not drawn explicitly.
Similarly, both the Secondary-side half-bridges are connected to the output capacitor and load
(though the connections may be implied, not drawn).
Figure 2.7: Examples of how the DAB works (Part 1)
After getting rid of the ideal transformer, by replacing the components with their appropriate
reflected values (RLOAD × n² and VOR = VO × n), and consciously deciding to ignore Lmag (with
caveats), and any recommended DC blocking capacitors too, we see that the (only) intervening
component left between the two sides is the leakage inductance Llkg. It will determine
everything. Note that Llkg can be placed between the switching nodes Vsw_a and Vsw_c as
shown, or between Vsw_b and Vsw_d. The results would be unchanged, since there are two
separate grounds, the Primary and Secondary grounds, which are not connected to each other!
The two sides are separate, except for Llkg. We can thus place Llkg anywhere in between the
two full-bridges, as one component, or even distributed in any manner. All we need to know is
the sum of all the intervening leakage inductances.
We are ultimately interested in finding an easy way to predict/know the voltage differential
that appears across “Llkg”, because that is the actual voltage which determines the slope of the
current during any particular time segment. Note that it also doesn’t matter how that voltage
differential is referenced, with respect to any of the fixed voltage rails (including grounds).
Because only the differential voltage across Llkg counts. But it can certainly get hard to visualize.
Because, back to physics, whenever we talk of a “voltage”, or “potential”, that itself is a relative
concept. There is always an implied reference value against which we are stating any voltage.
Usually it is the ground, but here we have two grounds, and they have no fixed, established
relationship to each other either! So, what is that “reference value” here, for talking about the
voltage at any given point?
The good news is that though the grounds are “separate”, they do get referenced to each other
in some fashion. Always. The way we are applying phase relationships, instead of, say, duty
cycle, there is never any instant when all the FETs on the Secondary side are OFF, or all the FETs
on Primary side are OFF. That can never happen in phase-controlled systems. So, in all cases,
on both the Primary and Secondary sides, at least one FET is definitely ON at any given instant
on any switching node (excluding deadtimes), and that serves to connect all four switching
nodes (actively) to one or the other of their related DC rails. In effect, there is always a definable
voltage relationship between the Primary ground and the Secondary ground too, though that
relationship keeps changing as the switches transition between ON and OFF states. Yet, purely
in terms of voltage differentials, we can definitely state that there is always a definable voltage
across Llkg, and thus continuity of current is maintained at all times between Primary and
Secondary full-bridges via Llkg.
In other words, the reference values of voltages do not really matter, only the differential across
Llkg does. But to get to that, we need to first get a mental picture of how current flows through
the system, and then come up with a very simple artifice to know exactly what Vlkg is. That is
how we proceed here.
Back to Figure 2.7, in each half-bridge, to avoid some clutter, we have omitted the reference
designators of the low-side FETs, but they are obvious and understood, based on Figure 2.5.
So, we have Q2, Q4, Q6 and Q8 directly below Q1, Q3, Q5 and Q7 respectively. We start with
schematic “A”, at a certain switch configuration, marked “now” on the Primary side. It
corresponds to Vsw_a pulled high to VIN, (or simply: “Va is Hi”). Now, the input source,
connected to the VIN rail through Q1 which is ON, attempts to push current through Llkg.
Assuming the current is flowing, after it goes through Llkg and arrives at Vsw_c, it finds that
particular node is pulled low, because Q6 is ON. So, it heads towards Secondary Ground.
Hypothetically, the current could have instead tried to push its way at Vsw_c through the diode
placed across Q5 (even though it is OFF), and that would have taken it through the output
capacitor and the load, over to the lower side of Q8, from where it could have returned to Vsw_d,
either through Q8, because it is ON, or it could have even gone through the diode across Q8
were it OFF. But luckily, current always chooses the easiest/shortest path, and so the current
prefers to go from Vsw_c, straight to the Secondary ground through Q6, and immediately return
to Vsw_d through Q8 (which is ON) and from there on to Vsw_b. It is a circulating current, as it
has bypassed the load entirely. From Vsw_b, since Q4 is ON, it returns to the Primary ground.
And thereby completes the full current path through the input DC source VIN.
So, it is fair to conclude that the input DC source, VIN, is pushing current, but that current does
NOT reach the load in this particular configuration, simply circulating back through the
Secondary ground and returning. But even that aspect is of secondary interest to us here. We
are primarily interested in working out the voltage across Llkg. For that we need to identify the
voltages that appeared on the left and right sides of Llkg! But the question still is: with respect
to which rail? The Primary ground maybe? OK, with that reference level assumed, we applied
VIN to the left of Llkg through Q1 being ON, and then 0V got simultaneously applied to the right
side of Llkg, through Q8, Q6 and Q4 (all being ON).
A more general way of looking at this emerges as follows:

Vlkg = Vp − Vs

where

Vp = Va − Vb

and

Vs = Vc − Vd
In other words, stick to differences in voltages all through. First find the difference between Va
and Vb. That difference is called “Vp” for Primary-side applied voltage. Then find the difference
between Vc and Vd. Call it “Vs” for Secondary-side applied voltage. Then the difference of Vp
and Vs gives us Vlkg.
Let us confirm this gave us the correct result in our above example. We had Va = VIN, Vb = 0, so
Vp was VIN – 0 = VIN. Similarly, Vc = 0, Vd = 0, so Vs = 0 – 0 = 0. So Vlkg = VIN – 0 = VIN. Which is
exactly what we got by tracing the current loop all the way through.
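This difference rule is easy to mechanize. A minimal sketch (plain Python; the function and variable names are ours, not from the text), reproducing the schematic “A” check:

```python
def v_lkg(va, vb, vc, vd):
    """Voltage across the leakage inductance from the four switching-node
    voltages: Vlkg = Vp - Vs, with Vp = Va - Vb and Vs = Vc - Vd."""
    vp = va - vb  # Primary-side applied voltage
    vs = vc - vd  # Secondary-side applied voltage
    return vp - vs

VIN = 400.0  # example input rail

# Schematic "A": Q1 ON pulls Vsw_a to VIN; Q4, Q6 and Q8 ON pull the other
# three switching nodes to their respective grounds (0 V).
assert v_lkg(VIN, 0.0, 0.0, 0.0) == VIN  # same answer as tracing the loop
```

The same three-line rule covers schematics “B”, “C” and “D” as well, which is the whole point: we never need to track how the two grounds happen to be connected at any instant.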
We can do the same for schematic “B” in Figure 2.7 and also for “C” and “D” in Figure 2.8. The
math validating our simple difference method is provided within each schematic.
Figure 2.8: Examples of how the DAB works (Part 2)
It confirms that thinking in terms of Vp and Vs is a good way of knowing what the voltage across
Llkg is. Then we can ignore the complexity of how the Primary and Secondary circuits get
connected to each other at any given instant.
So finally, in Figure 2.9, we show that we can think in terms of a single pole, triple throw (SP3T)
switch on either side of Llkg. On the left (Primary) side it chooses between VIN, -VIN and 0 V. As
indicated, we get VIN as Vp, if Q1 is ON, and Q3 is OFF (Q4 is thus ON). We get -VIN, if Q1 is OFF,
but Q3 is ON. If the switching nodes “a” and “b” are both high, or both low, then Vp is zero (the
“flat” regions between the mountains and valleys of Vp). Similarly, on the right side of Llkg, we
can get VOR, -VOR and 0. And thereafter, Vp − Vs is the difference voltage across Llkg, from which
we can calculate the current segments.
In Figure 2.10 and Figure 2.11, on the basis of careful simulations matching our emerging
Mathcad spreadsheet, we can quickly confirm our “SP3T”-based method for calculating Vlkg.
We have further tabulated the durations of all the eight segments based on selected D1 and D2,
and the corresponding Vlkg’s in each of the eight segments. We have two cases as mentioned
earlier: Case 1 corresponding to D1 + D2 > 1, and Case 2 for D1 + D2 < 1. Which case is on hand
is quite obvious by looking at the waveform shape on “Vsw_d”, and comparing it to the “master”
node “Vsw_a”. So if at the moment Vsw_a goes high, it finds that Vsw_d is also high, we have
Case 1. Otherwise it is Case 2.
For convenience, we have compiled all the durations and applied voltages for both cases, in
Figure 2.12. In that we also present a Mathcad “proof” that if we take the duration of each
segment multiplied by the applied voltage during that segment (essentially the voltseconds for
that segment), and sum over all the eight segments, for any D1 and D2, we do get the sum over
any cycle as zero. This is very providential, as it implies that the fundamental voltseconds law
is going to be upheld without user intervention, and so “inductor reset” is always guaranteed in
the DAB. It is thus “inherently stable” as we had hoped, and qualifies to be taken seriously as a
new, fundamental topology. Note that we made no assumptions about the relationship between
VIN and VOR either, so the voltseconds law will be upheld for any output gain (Gain = VOR/VIN).
Of course, to be assured that small timing asymmetries don’t throw all this into doubt, we still
strongly recommend the DC blocking capacitor discussed earlier.
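The same zero-volt-second conclusion can be checked numerically without the segment table of Figure 2.12: build the four switching-node waveforms as phase-shifted square waves, form Vp, Vs and Vlkg with the difference rule, and average over one cycle. A sketch (Python/NumPy; the rails, angles and phase conventions are illustrative assumptions, and the result holds for any choice):

```python
import numpy as np

# Numeric check of the zero-volt-second result. VIN/VOR and the angles below
# are just the Case 2 example values; any other choice gives the same zero.
VIN, VOR = 400.0, 275.0
ANG1, ANG2 = 70.0, 50.0   # intra- and inter-phase angles, in degrees
N = 7200                  # samples per 360 degrees
k = np.arange(N)

def leg(shift_deg, rail):
    """Half-bridge switch node: at 'rail' for half of its (phase-shifted)
    cycle, at 0 for the other half."""
    shift = int(round(shift_deg / 360.0 * N))
    return np.where(((k - shift) % N) < N // 2, rail, 0.0)

vp = leg(0.0, VIN) - leg(ANG1, VIN)          # three-level Primary voltage Va - Vb
vs = leg(ANG2, VOR) - leg(ANG2 + ANG1, VOR)  # three-level Secondary voltage Vc - Vd
v_lkg = vp - vs

# The volt-second integral over a full cycle is zero: "inductor reset" comes
# for free, with no user intervention.
assert abs(np.mean(v_lkg)) < 1e-9
```

Each leg is high for exactly half its cycle, so Vp and Vs each average to zero, and so does their difference, for any D1, D2 and any gain.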
Now we know that the inductor current always reaches exactly the same value that it started
the cycle off with. Because we now know how to calculate all of the delta-I’s (increments and
decrements), based on applied voltages, and we thus determined that over a full cycle, the net
increment in I, happens to exactly equal the net decrement. But we still don’t know the absolute
value of the current at the beginning and end of every cycle. What exactly is that value? How do
we figure it out? For one, that critical number determines the output voltage we can get for a
given load, as we will soon learn. That determines how much of the current circulates, and how
much gets delivered to the load.
Figure 2.9: How the voltage segments cause changes in slope of current (in principle)
Figure 2.10: The eight segments of Case 1
Figure 2.11: The eight segments of Case 2
Figure 2.12: Compiled table for the eight segments, both cases, and a Mathcad “proof” that sum
of voltseconds over a full cycle is unconditionally zero
We will take a peek at the emerging Mathcad spreadsheet here, which matches simulations so
well. One of the hints we had for discovering the exact shape of the leakage inductor current
waveform was in the initial observation from simulations, that the presence of a large DC
blocking capacitor in the Primary and/or Secondary paths, did nothing to the performance of
the converter. Which meant that the average DC value of the inductor current had to be zero.
Yes, that can always be confirmed. Think: the DC blocking capacitor will in effect subtract any
DC value present. So clearly there was no DC value to start with (of course assuming no minor
asymmetries in FET timings), or the actual insertion of the DC blocking capacitor would have
changed everything. It changed nothing.
In other words, the inductor current must have a zero DC value to start with: equal areas above
and below the horizontal zero axis. That makes sense since an ideal transformer by definition,
only responds to AC, by Faraday’s law of Induction, not to DC. We can conclude that both the
Primary-side and Secondary-side current waveforms in Figure 2.1, must always have an
average value of zero, since they are pure AC waveforms going into and out of an ideal
transformer. Yes, even if there is load present! No DC. Because in any typical Secondary side of
a transformer-based converter, the very purpose of the rectification diodes is to take the AC
current waveform (with no existing DC value at that point), then rectify it (and that gives it a
DC value). Then we push that DC current through the load, and deliver power. There is no
contradiction or confusion.
As an example, let us show how the Mathcad spreadsheet worked out the actual inductor
current waveform. Let us use the phase angles we had for matching against Figure 2.11, i.e.
Angle 1 and Angle 2 being 70° and 50° respectively. We have also used a VIN value of 400 V and
a VOR value of 275 V in the spreadsheet, matching Figure 2.11 (the Mathcad-suggested value of
the load resistor for achieving that VOR was 100 Ω, which is what we had used in the simulation,
so it all matches).
First step: We generate and accumulate all the current increments/decrements suggested by
the applied voltages in the 8 segments, but we arbitrarily start from zero current. As expected,
and hoped for, the last segment returns the current back to zero current, the starting value. So,
inductor reset is assured. But that does not mean it is the actual inductor current shape!
Because that current must have an average (DC) value of zero. And this one doesn’t. So, the
trick we use in our Mathcad spreadsheet is to calculate the value of the current at 0°, and at
180°. Call it the “offset”. We now move the current waveform vertically by half the offset value,
to reposition the current waveform, such that the currents at 0° and at 180°, are equal and
opposite in sign. Since we expect the overall current waveform between 0 and 360° to be
symmetrical around 180°, this little offset trick indirectly assures us that we will get equal areas
of current above and below the zero-current horizontal axis, i.e. with net zero DC value. See
Figure 2.13.
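That offset trick can be sketched numerically (Python/NumPy; the 52 µH, 100 kHz and the phase conventions are our own illustrative assumptions, not the spreadsheet's internals):

```python
import numpy as np

# Sketch of the "offset trick": integrate Vlkg starting arbitrarily from zero
# current, then shift the whole waveform vertically so that i(0) = -i(180 deg).
# For a waveform symmetric about 180 deg, this forces equal areas above and
# below zero, i.e. a pure-AC inductor current.
VIN, VOR = 400.0, 275.0
ANG1, ANG2 = 70.0, 50.0       # the Case 2 example angles, in degrees
L, FSW = 52e-6, 100e3         # assumed leakage inductance and frequency
N = 7200
k = np.arange(N)
dt = (1.0 / FSW) / N          # time step per sample

def leg(shift_deg, rail):
    shift = int(round(shift_deg / 360.0 * N))
    return np.where(((k - shift) % N) < N // 2, rail, 0.0)

v_lkg = (leg(0.0, VIN) - leg(ANG1, VIN)
         - leg(ANG2, VOR) + leg(ANG2 + ANG1, VOR))

i = np.cumsum(v_lkg) * dt / L      # current, arbitrarily starting near 0 A
offset = (i[0] + i[N // 2]) / 2.0  # average of i(0 deg) and i(180 deg)
i = i - offset                     # reposition: i(0 deg) = -i(180 deg)

assert abs(i[0] + i[N // 2]) < 1e-9  # end points equal and opposite in sign
assert abs(np.mean(i)) < 1e-6        # net DC value is zero, as required
```

Because the applied voltage in the second half-cycle is the mirror image of the first, this single vertical shift is enough to land the whole waveform symmetrically about the zero-current axis.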
As we can see, Figure 2.13 also indicates an almost perfect match between the Mathcad
spreadsheet and the SIMPLIS simulation. But to get to know the overall power capability of this
converter, we need to calculate the DC component of the input current next.
Figure 2.13: Mathcad and SIMPLIS comparison for Case 2 [i.e. D1 + D2 < 1]
2.8 Input DC current component and Power Capability (Scaling Laws too)
This is the critical part. It determines how much actual useful energy is being pulled into the
converter every cycle, for delivery to the output. We need to separate the actual DC current
component from the inductor current. So, the basic rules for that are actually very simple, and
the same for Case 1 and Case 2. Here are the steps:
a) Keep in mind that DC current can only be drawn every half cycle, in the first three of the
four segments, not for the entire duration.
b) We may be starting the cycle (i.e. Q1 turning ON), with a negative current, quite like what
we see in Figure 2.13 after we apply the correct offset, as explained above. This
represents a certain charge in the input capacitors, which is part of the circulating
component. And because it is circulating current, we need to keep it “circulating.” In
other words, even though at the start of the cycle, the input voltage source is “available,”
it is not asked to draw any current till full charge equalization takes place, as indicated
by the two small triangular shaded areas in Figure 2.13. Until charge balance (basically
zeroing the current integrated over time), takes place, no DC current is drawn from the
input source. But once charge balance is completed (for the circulating component), the
input source then provides the current for “making up” the rest of the inductor current
waveform, which in turn was created simply by the applied voltages during the
segments.
Once we know the average of the DC input current, then input power is very simply just the
input voltage, multiplied by the average DC input current, and assuming perfect efficiency, that
is the power which will be delivered to the output. In other words, this simple equality will tell
us the power capability of the said converter, for the assumed or targeted VOR (and thus VO):

VIN × IIN_DC_AVG = PIN = PO = VO²/RLOAD
And that clears up the mystery of how much power we can get for a desired output. Clearly, to
increase the power, say two times, if we just halve the leakage inductance, the slopes of all the
current segments will double and in effect, IIN_DC_AVG will also double, and we will get twice the
power! Another vindication of our scaling laws. Similarly, if instead of asking for twice the
power, after halving the inductance, we double the frequency, once again the increments of
current will be back to what they originally were, and so would IIN_DC_AVG. So, we would just get
the original power. Summing up:
To double the power at a certain frequency (keeping all else unchanged such as the
phase angles), we need to halve the inductance.
To double the frequency, keeping to the same power, once again we just need to halve
the inductance.
As before, if we double the input voltage, allowing the output to rise proportionally too,
we will quadruple the power.
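The three rules above can be captured in a small helper (a sketch in Python; the function and parameter names are ours):

```python
def scale(p_old, l_old, power_factor=1.0, freq_factor=1.0, vin_factor=1.0):
    """Apply the DAB scaling laws, keeping the angles and gain fixed:
    - halving the inductance doubles the power (l_new = l_old / power_factor)
    - doubling the frequency at the same power halves the inductance
    - doubling VIN (letting the output rise proportionally) quadruples power."""
    l_new = l_old / (power_factor * freq_factor)
    p_new = p_old * power_factor * vin_factor ** 2
    return p_new, l_new

# Sanity checks against the rules as stated:
assert scale(475.0, 52e-6, power_factor=2.0) == (950.0, 26e-6)   # half L, 2x P
assert scale(475.0, 52e-6, freq_factor=2.0) == (475.0, 26e-6)    # 2x f, same P
assert scale(475.0, 52e-6, vin_factor=2.0) == (1900.0, 52e-6)    # 2x VIN, 4x P
```

This is only bookkeeping for the stated proportionalities; it assumes the operating point (phase angles and gain) is left unchanged.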
Now, how did we actually “see” the DC input current component in simulations? All voltage
sources in simulators and usually on the bench too, are fully capable of not just sourcing
current, but sinking it too. But that doesn’t easily tell us the net DC power coming from such a
source. Therefore, we put in a series diode, and a relatively small bulk capacitor to only provide
the AC component. A capacitor automatically charges up or discharges, if charge balance is not
maintained, and that helps reverse-bias or forward-bias the series diode at the right moment,
to provide DC input current component to the converter. See the SIMPLIS schematic we used,
in Figure 2.14. Notice that we have a DC blocking capacitor, because even the simulator often
misbehaves without it.
Figure 2.14: The SIMPLIS simulator used, showing how to extract the input DC current
component
With our new-found confidence, let us use our scaling laws, combined with our basic power
delivery curves in Figure 2.3, to do a quick design. We will weigh the merits of the two design
options presented here, a bit later. But both cater amenably to the following requirement (quite
arbitrary).
Design a DAB to deliver 7 kW at a 48V output rail. The rectified input rail is 380V. We
want it to operate at 100 kHz. This will become bidirectional automatically, if we simply change
the sign of “Angle 2”, the “inter phase angle” (i.e. between the Primary and Secondary full-bridges).
We can always manage the 48V step-down function, using basic turns-ratio. So, if we assume a
desired gain of unity (Gain =1) on the Primary stage, then we can choose to basically design it
for a VOR of 380V (Gain =1). To get 7kW, we will need to apply a reflected load resistor of
(380)²/7000 ≈ 20.7 Ω across that output (or equivalently, 329 mΩ across the stepped-down
48V output). Let us, for now, simply assume we can get close to 100% efficiency.
The good news is that Figure 2.3 was also generated using 100kHz, so no frequency scaling is
required here. Just power scaling and input voltage scaling.
To set the (fixed) phase angle of the Primary stage (“Angle 1” or “intra phase angle”), which will
also be the phase angle within the Secondary side as we always assume, we realize that if we
set this phase angle to less than 90°, we are essentially starving the Secondary side
unnecessarily, and if we set it higher than 90°, we may be overdriving the Primary.
But in either case, we get maximum power when Angle 2 (the “inter phase angle”) is exactly
90°. See Figure 2.3 closely to confirm that. Let us break our solution into two basic design
choices:
Option A: Looking at Figure 2.3, we see that for its “Case B” (i.e. Angle 1 equal to 90°), we get
exactly 475W from the converter (Gain =1, and with Angle 2 set to its maximum power level of
90°). But the curves of Figure 2.3 were all generated using 200V input. If we scale that to 380V,
we will automatically get (by the V² scaling law): (380/200)² × 475 = 1715 W from the same
converter. But our requirement is 7kW. So, the desired power scaling factor is 7/1.715 = 4.082.
In other words, all we have to do is scale the leakage inductance of Figure 2.3, i.e. 52 µH, down
to 52/4.082 = 12.75 µH.
Option B: If we double Angle 1 to 180°, then as per Figure 2.3 (using the alternate vertical scale
shown), we will get exactly twice the power, i.e. 2 × 475 = 950 W. Scaling that from 200V input
to 380V would give us (380/200)² × 950 = 3430 W. In that case, the power scaling factor is
only 7000/3430 = 2.041. To meet this, all we would need to do is reduce the inductance of
Figure 2.3, by this scaling factor, to 52 µH/2.041 = 25.5 µH.
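The Option A and Option B arithmetic can be checked in a few lines (a sketch; the 475 W, 52 µH and 200 V baseline numbers are read off Figure 2.3):

```python
# Baseline read off Figure 2.3 (Angle 2 = 90 deg, Gain = 1, 200 V input):
P_BASE, L_BASE, V_BASE = 475.0, 52e-6, 200.0
VIN, P_TARGET = 380.0, 7000.0

# Option A: Angle 1 = 90 deg. Voltage-scale the baseline power, then find
# the power scaling factor and the corresponding leakage inductance.
p_a = (VIN / V_BASE) ** 2 * P_BASE
k_a = P_TARGET / p_a
l_a = L_BASE / k_a

# Option B: Angle 1 = 180 deg gives twice the baseline power to start with.
p_b = (VIN / V_BASE) ** 2 * (2 * P_BASE)
k_b = P_TARGET / p_b
l_b = L_BASE / k_b

assert abs(p_a - 1715.0) < 1.0        # ~1715 W at 380 V input
assert abs(k_a - 4.082) < 0.001       # power scaling factor
assert abs(l_a - 12.75e-6) < 0.02e-6  # ~12.75 uH
assert abs(l_b - 25.5e-6) < 0.03e-6   # ~25.5 uH
```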
Let us set up the simulator to test this out. We will use an input of 380V, and place a (reflected)
load resistor of 20.7 Ω, and then drive it with the phase angles: Option A: Angle 1 = 90°, Angle 2
= 90°, Llkg = 12.75 µH; and Option B: Angle 1 = 180°, Angle 2 = 90°, Llkg = 25.5 µH.
In either case, if Figure 2.3 is correct, we should expect to get a VOR of about 380V (because we
picked the Gain =1 curve in Figure 2.3). And if so, it would guarantee 7kW. Otherwise, not.
Naturally, we don’t want over-design, but certainly not under-design!
Results of simulation:
Option A: We get 392V and 7.45 kW. Very good match. To recall, Option A is: Intra-angle 90°,
Inter-angle 90°, Llkg = 12.75 µH, fsw = 100kHz, VIN = 380V, VOR = 380V (Gain = 1.0). Use a turns
ratio of 7.9 for getting 48V.
Option B: We get 390V and 7.3 kW. Very good match. To recall, Option B is: Intra-angle 180°,
Inter-angle 90°, Llkg = 25.5 µH, fsw = 100kHz, VIN = 380V, VOR = 380V (Gain = 1.0). Use a turns
ratio of 7.9 for getting 48V.
So, within the slight fudge factor of our graphical interface, we are still getting an astonishingly
accurate match to simulations. And as an added, inadvertent benefit, we actually get slightly
higher power than the “ideal” calculation, and that helps us, since in practice we expect the
efficiency to be around 95–98%, not 100% as we assumed initially. A bit later, we will analyze
the differences between Option A and Option B, though both meet our basic requirement.
In Figure 2.15, we have for convenience, design curves for Angle 1 = 180°, and various angles
for Angle 2, not just 90° as in Option B above. It just confirms that we won’t get more power by
making Angle 2 greater than 90°. But this also documents the drop-off on both sides of the peak
in Figure 2.3. We pick a trial point here and do the simulations to confirm the match. It is not a
maximum power angle, but just for checking, we are picking Angle 2 as 45°.
This corresponds to a Gain target of 1.25 and Figure 2.15 says that we need a 70 Ω (reflected)
load resistor to pull that off. Note that these curves do not depend on input voltage. Expressing
the reflected output voltage as a ratio of input voltage makes the curve “universal” in a sense.
Figure 2.15: Resistor Lookup chart for case of Primary-side half-bridges fully out of phase
Note that we have introduced a special case with Angle 2 = 26°, for reasons we will describe
later.
In Figure 2.16, we have compared the key waveforms generated by the detailed Mathcad
spreadsheet (from which the easy graphical aid of Figure 2.15 came about), against the
SIMPLIS simulations. In the simulations we used an input of 400V, and used the suggested 70
Ω from Figure 2.15, though the exact value the Mathcad spreadsheet gave us was 69.5 Ω. The
simulations returned 505 V instead of 500 V, which is very accurate indeed.
In Figure 2.17, we look in a little more detail at the waveforms from the simulation. Since the
current goes negative in Q1, Q2, Q3 and Q4 just before turn-ON, with a little deadtime we are,
in effect, allowing the body diode (or paralleled diode) across each to conduct just before we
turn it ON, thus getting ZVS. In Q6 to Q8, we see that the current is zero just prior to turning
ON, so we have in effect zero-current switching (ZCS).
Is the value of 70 Ω consistent with Figure 2.3? Let’s check. For a gain of 1.25 at Angle 2 of 45°,
the power is exactly between the grid lines for 425W and 450W. Let us say, 437W. Wait a
minute, that vertical axis was for an Angle 1 of 90°. For 180° we will get exactly twice that
power. So, we expect 2 × 437 = 874W. Since Figure 2.3 was generated for an input of 200V,
with a gain of 1.25, we expect VOR = 1.25 × 200 = 250V. To get 874 W from this output would
require a load resistor of (250)²/874 = 71.5 Ω. Very close to the 70 Ω predicted by Figure 2.15.
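That consistency check reduces to a few lines of arithmetic (the 437 W reading is the eyeball interpolation from Figure 2.3 described above):

```python
p_90 = 437.0     # W, read off Figure 2.3 for Gain = 1.25, Angle 2 = 45 deg
p_180 = 2 * p_90  # Angle 1 = 180 deg gives twice the Angle 1 = 90 deg power
vor = 1.25 * 200.0  # Figure 2.3 baseline input is 200 V, Gain = 1.25
r = vor ** 2 / p_180  # required reflected load resistor

assert abs(r - 71.5) < 0.1  # close to the 70 ohm predicted by Figure 2.15
```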
So there is no contradiction between the two methods, as described by Figure 2.3 and by
Figure 2.15. We can use either method/graph, then apply scaling laws, to meet any
requirement. Using resistor value graphs from Figure 2.15, we do not even need to scale the
input voltage. Because the resistor value for a certain gain applies to any input voltage! It scales
by itself, is one way of looking at this!
Figure 2.16: Comparison between Mathcad and SIMPLIS predictions (test case of Figure 2.15)
In Figure 2.18, we have some more design charts, mainly around the possibility of setting Angle
1 at 45° or 135°. We are focusing only on Case 2 (i.e. D1 + D2 < 1). Note that, by now, we know that
we always get maximum power only for Angle 2 = 90°, irrespective of Angle 1. All power curves
are symmetrical around Angle 2 = 90°, as per Figure 2.3. In Figure 2.19, and Figure 2.20, we
simulate three test points, arbitrarily chosen from Figure 2.18. The resulting VOR is very close
to what we expected. In Figure 2.20, though, we see that ZVS can break down at some phase
combinations. We will discuss this in more detail later.
Previously we had hinted that the “intra phase angle” of each full-bridge (Angle 1, i.e. D1 × π)
should always be set greater than the “inter phase angle” (Angle 2, i.e. D2 × π). A key reason is
that D1 also corresponds to the power capability of the Primary side, and reducing that is
counterproductive. From Figure 2.21, we can easily see that one impact is that the angle
available for drawing in DC input current is substantially reduced, because the AC (circulating)
current is relatively higher and takes longer and longer to reset. This refers to the triangular
areas mentioned when we discussed Figure 2.13.
Figure 2.17: Checking for soft-switching in the case presented in the preceding figure
Figure 2.18: More design graphs, focused on Case 2 (D1 + D2 <1)
Figure 2.19: Simulation results for three test cases, showing good agreement
Figure 2.20: Continued simulation results from preceding figure
Figure 2.21: What happens if D2 is made greater than D1
2.12 The Two Choices for Angle 1 for the Same Power and Different Gain Targets
One of the most fundamental questions for a DAB designer who wants to keep it simple: should
we set Angle 1 as 90° or 180°? We can see from Figure 2.3 that both give an optimum shape for
the power curve, incorporating no “flat” region around Angle 2 = 90°. We also realize that the
Angle 1 = 180° curve gives twice the power of the Angle 1 = 90° case, albeit with different
leakage inductances. However, we could consider running the Angle 1 = 180° converter not
with Angle 2 = 90°, but with a smaller phase angle, to deliver exactly the same power we would
get for the case of Angle 1 = 90° and Angle 2 = 90°. So what is that phase angle?
Looking at Figure 2.3 closely, we will see that all the curves for Angle 1 equal to 90° or 180°,
irrespective of set gain, end up at half power when Angle 2 is reduced from 90° to exactly 26°!
Which means that for a given desired power at a certain desired gain, we have two distinct
possibilities. Let us call these “Converter A” and “Converter B” for convenience.
Converter A: Angle 1 = 180° and Angle 2 = 26°
Converter B: Angle 1 = 90° and Angle 2 = 90°
That is why in Figure 2.15, we specifically provided the curve for Converter A case above.
So, for the base kernel on which Figure 2.3 was based, we have calculated the required load
resistor, and thus the expected VOR and power output. The comparative results are plotted in
Figure 2.22. In fact, another key question in mind is: what should be the target gain? We
can of course get our desired output rail using the turns ratio of the transformer, but the underlying
DAB can be set for any desired gain, step-up or step-down. So what is a good choice, one that
also guarantees high efficiency?
We realize that we would like to introduce a higher circulating current component, just to
instigate ZVS, but that also leads to higher conduction losses in the bargain. So, it is a tradeoff
of course.
However, by staring at Figure 2.22 a bit, we can draw the following conclusions:
a) In general, Converter A case is preferable
b) In general, it seems a good idea to target a gain of unity
With that choice we get not only soft-switching at maximum load, but also much lower
transistor RMS currents, so lower conduction losses too.
It is also very interesting that the RMS current of all transistors Q1 to Q8 is identical in all cases!
Despite the very different current shapes! A proof that nature attempts to distribute stresses
evenly, if we let it! And the DAB does exactly that by providing eight transistors, distributed
evenly on both sides.
Figure 2.22: Comparing the two possibilities for setting Angle 1
2.13 What if: We run the DAB in synchronous mode?
In Figure 2.23, this last question is answered too. Basically, the power drops significantly, but
we do get soft switching again, thanks to the presence of the inductor. Had we added an output
choke, we could have added control of output voltage too, to this “synchronous DAB” of ours.
That is how we go back in time and reinvent the PSFB!
Here we will use Figure 2.3 for a quick example to reinforce concepts.
Design a DAB converter for 200 kW, from a 300 V input to an 800 V output, preferably operated
at 85 kHz.
Looking at Figure 2.3, the recommended reflected load resistor is 84 Ω, with 52 µH, for a Gain of
1, as recommended, starting with Case B. Note that this resistor value does not depend on the input
voltage, or on the frequency!
If we place this resistor across the reflected output voltage of 300V (since Gain =1), the power
we will get (from the kernel on which Figure 2.3 was based) is:
P_INIT = VOR²/R = 300²/84 ≈ 1.071 kW
But we want 200kW. The desired power scaling factor thus is:
P_SC = P_O/P_INIT = 200k/1.071k ≈ 186.7
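The scaling arithmetic can be checked in a few lines of Python; the variable names are ours, and 84 Ω is the reflected load resistor read off Figure 2.3 as quoted above:

```python
# Scaling the Figure 2.3 base kernel to the 200 kW design target.
V_OR = 300.0        # reflected output voltage (V), Gain = 1
R_BASE = 84.0       # base-kernel reflected load resistor (ohm)
P_TARGET = 200e3    # desired output power (W)

P_INIT = V_OR ** 2 / R_BASE     # power of the unscaled kernel, ~1.071 kW
P_SC = P_TARGET / P_INIT        # required power scaling factor, ~187
R_SCALED = R_BASE / P_SC        # scaled reflected load resistor, 0.45 ohm
n = 800.0 / V_OR                # turns ratio for the 800 V output, ~2.667
print(P_INIT, P_SC, R_SCALED, n)
```

The 0.45 Ω agrees with the value used in the simulations below; the scaled leakage inductance also depends on the operating frequency, so it is not derived here.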
Finally, we present the results of the simulations (with a 1:1 transformer). So, we expect to
arrive at a VOR of ~300 V, since we set Gain = 1 (preferred). We have used 0.33 µH leakage
inductance and a reflected load resistor of 0.45 Ω, and we can confirm we achieved a bit over
200 kW. Figure 2.24 is for the baseline case of Angle 1 = Angle 2 = 90°. We get a bit over
200 kW with a 300 V reflected output, but not necessarily soft-switching! In Figure 2.25 we show
exactly the same circuit, but driven at the preferred setting of Angle 1 = 180° and Angle 2 = 26°.
As before, we see we reached a reflected output voltage of around 300 V,
Figure 2.23: Operating the DAB in synchronous mode
Figure 2.24: SIMPLIS simulation of 200kW converter for Angle 1= Angle 2 = 90°
commensurate with a target gain of unity, and thus an output power of a bit over 200 kW, as we
expected. With this latter configuration, we can see that full soft-switching is assured! So this
is our final design target: 0.33 µH, 85 kHz. That's all!
Indeed, we want to deliver this power at 800 V, so we need a step-up turns ratio of NS/NP = VO/VOR =
800/300 = 2.667.
This will deliver 200 kW into an 800 V output with full soft-switching, provided we set the
intra-phase angle of both full-bridges to Angle 1 = 180°, and the phase lag between the Primary and
Secondary full-bridges to Angle 2 = 26°. That completes the 200 kW design example, based on
just one graphical aid, namely Figure 2.3. Such is the power of scaling, as applied to the DAB too.
Figure 2.25: SIMPLIS simulation of 200kW converter for Angle 1= 180° and Angle 2=26°
CHAPTER 3 – SMALL SIGNAL MODELING
(GUEST CHAPTER)
3.1 Intro
In 2013 I got my Master's degree in Power Electronics. The same year I joined the Thales
Alenia Space power electronics team in Rome, Italy, and I was almost certain I was ready to design
power systems for satellite applications. No belief I have held during my professional
career was more wrong than that one.
On my second working day, the boss showed me an A1 sheet with some 500-600
components on it. It was the schematic of a "simple" Flyback converter used for satellite applications.
I realized immediately that I was not ready for that job. The scenario became worse one week later,
when the boss assigned me the specs for a new project. It was, once again, a Flyback converter.
Then, as now, one of the most crucial aspects of a power stage was (and is)
the design of the control section: a lot of components linked together just to get what we simply call
"current mode control" or "voltage mode control", something that, using an electrical
simulator, we obtain with a small bunch of components. A-Z 2nd Chapter 12 focuses on
feedback loop analysis and stability concepts, and gives the reader a basic understanding
of two different control strategies: voltage mode control (VM) and current mode control
(CM), both assumed in continuous conduction mode (CCM) for the sake of simplicity.
Independently of the adopted control strategy, a DC-DC or offline converter design needs to
comply with different requirements. Two of them have been fully explained in Chapter
12: a) the loop gain and phase, which have an impact on stability, and b) the
audiosusceptibility, which "quantifies" the transmission of noise from the input section to the
output section. Beyond these, there are two other important transfer functions intrinsically
related to the general requirements of a power stage: c) the output regulation, related to the
output impedance, and d) the sensitivity of the power system to input filter interaction, related
to the input impedance. With these four transfer functions we will be able to describe the
converter behavior almost completely. This is one of the reasons why introducing a linear model
of the power stage can be very useful to the reader, in both directions: understanding how the
converter behaves in the frequency domain AND designing the control loop properly.
Let's start by focusing on the control-to-output plant transfer function G(s), assuming Voltage
Mode Control in Continuous Conduction Mode. The intuitive way to get a simplified closed-form
equation for G(s) is to multiply all the terms involved in the loop sketched in Figure 12.10, p. 457,
A-Z 2nd.
[Flow chart: START → Modulator Gain 1/V_RAMP → DC Gain ∂V_OUT/∂D → Output Filter LC → Output Capacitor ESR (Buck); with the additional RHP Zero block for the Boost and Buck-Boost]
Figure 3.1: Intuitive diagram identifying all elements involved in the control-to-output TF
evaluation for the basic topologies (Right Half Plane zero included). Voltage mode CCM assumed.
At this point some questions arise spontaneously:
Why, for "not-buck" topologies, does the equivalent inductance in the plant transfer function
need to be taken as Le = L/(1-D)²?
Where does the right half plane zero come from?
Where do the formulas come from?
Are the formulas correct?
How can we extract Bode diagrams starting from a nonlinear circuit, in order to verify all the
previous equations?
All these questions make VM control theory not so intuitive. Sketching time-domain
waveforms with pen and paper is simple, but extracting the gain, poles and zeros for each
topology is another story, because power converters are nonlinear systems by nature.
The situation becomes worse if we focus on the Current Mode case in CCM. Current mode
control has its pros and cons with respect to voltage mode control. I personally invite the
reader to focus on two sentences frequently repeated in related technical books and articles,
in particular:
It can be shown that using this technique (CM) the inductor effectively goes “out of the
picture” in the sense that there is no double LC pole anymore.
Is this true? How is it possible to verify this statement?
Everyone seems to agree that current-mode control alters the poles of the system compared
to voltage-mode control, but that the zeros of voltage-mode control remain unchanged
Let's focus on the previous two bullets. If the zeros are unchanged, it means the RHP zero is
still present if Current Mode is used. The RHP zero, by its nature, involves the inductance in its
calculation. But at the same time the Current Mode approach "pushes the inductor out of the
picture"! It seems a paradox.
A-Z 2nd page 503 reports a table that lists all the quantities involved in plotting the control-to-output
TF in current mode. Surely current mode control is less intuitive than voltage mode
control, but if we try to sketch the same flow chart as we did for voltage mode control we
get:
[Flow chart: Modulator Gain → Output Filter RC pole → Output Capacitor ESR (Buck); with the additional RHP Zero block for the Boost and Buck-Boost]
Figure 3.2: Intuitive diagram identifying all elements involved in the control-to-output TF
evaluation for the basic topologies (RHP zero included). Peak Current Mode CCM assumed.
Now, observing the above diagram, additional questions complicate the situation a bit more.
How do we calculate the RC pole? And the RHP zero? Do they depend on the duty cycle?
How is the Modulator Gain calculated?
More than anything: how is it possible that the current mode control time-domain
waveforms are identical to the ones we get using voltage mode control, while the transfer
functions (frequency responses) differ?
Extracting all the transfer functions for a specific power stage that uses a specific control method
is not so easy. Linear modeling will be introduced to settle all these questions, showing
a unique approach valid for all topologies. The only thing we need to do is to map the cycle-by-cycle
nonlinear circuit (which involves MOSFETs and diodes) into something that uses linear
components or controlled sources. Generally speaking, we can say the linear modeling approach
allows one to:
Get a full understanding of how to design the feedback loop
Evaluate the effect of parasitic elements on power stage stability
Predict the transient response, including undershoot and overshoot, if a load step is
applied
"Verify" the plant behavior independently of the control strategy used or the power
stage selected
Simplify the full converter analysis
What is not possible to do with this method:
The presented technique doesn't work with variable-frequency converters. We will see
that a linear model automatically locks in the switching frequency at which it was extracted; it is
evident that for variable-frequency converters we would need "infinitely many" linear models. For
those topologies all transfer functions are calculated using piecewise linear simulation: the
predicted transfer function is obtained by exciting the converter at different frequencies,
spaced linearly in the frequency domain, and storing two pieces of information, gain and phase,
for each frequency point.
3.2 Voltage Mode Control PWM switch
Although we did the work for the DCM case too, for both control strategies, in alignment with the
rest of the A-Z 2nd book the Voltage Mode linear model will be discussed for the CCM case only, for
the sake of simplicity. Three easy steps are needed to get this model:
a) Identify the nonlinear elements and define the active, passive, and common terminals
common: the common terminal between the diode and the power switch
active: the other terminal of the Active Switch (MOS)
passive: the other terminal of the Passive Switch (Diode)
Buck-Boost: current goes into the a terminal; out of the c terminal; into the p terminal.
Buck: current goes into the a terminal; out of the c terminal; into the p terminal.
Boost: current goes into the a terminal; into the c terminal; out of the p terminal.
Figure 3.3: Identifying PWM switch terminals for the three basic topologies
b) Average the current waveforms across the PWM switch (Table 4.1, A-Z 2nd)
c) Average the voltage waveforms across the PWM switch and build it
V_ap = V_IN
V_cp = V_OUT = D · V_ap (note the mean voltage across the inductor is zero)
V_ac = V_ap − V_cp
It will be shown that the DC bias point can be "emulated" by using a two-port cell: a current-controlled
current source on the primary and a voltage-controlled voltage source on the
secondary. Note also that both controlled sources can be replaced by an ideal transformer with
turns ratio N1:N2 = 1:D. The Buck large-signal model is shown on the next page.
Saying the PWM switch is "fully invariant" means the large-signal model of a cycle-by-cycle
converter can be obtained by replacing the nonlinear three-terminal cell (which identifies the
MOS + DIODE pair) with a linear, invariant three-terminal cell represented by an ideal
transformer with turns ratio N1:N2 = 1:D. The power of this approach is that it is applicable to
all topologies.
From cycle by cycle to Linear
Figure 3.4: From cycle by cycle model to linear model. Case Study: Buck converter in Voltage
mode (CCM assumed)
Figure 3.5: The same model we got before, obtained by placing the ideal transformer
A small-signal version is acquired by linearizing the large-signal model. We need to focus only on the
inserted controlled sources, because all the rest of the network is linear per se. The
Current Controlled Current Source is I_a = D · I_c. It involves two variables: D and I_c. Applying
the partial-derivative procedure (lowercase letters denote the small-signal perturbations) we get:
i_a = ∂(D·I_c)/∂D · d + ∂(D·I_c)/∂I_c · i_c → i_a = I_c · d + D · i_c
Same story for the Voltage Controlled Voltage Source; two variables are involved: D and V_ap. So,
applying the partial-derivative procedure, we get:
v_cp = ∂(D·V_ap)/∂D · d + ∂(D·V_ap)/∂V_ap · v_ap → v_cp = V_ap · d + D · v_ap
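The first-order expansion can be sanity-checked numerically; the operating point and perturbation sizes below are arbitrary choices of ours:

```python
# Numerical check of the linearization of Ia = D*Ic: for small perturbations,
# the true change of Ia should match Ic*dD + D*dIc to first order.
D, Ic = 0.6, 1.0            # arbitrary operating point
dD, dIc = 1e-4, 2e-4        # small perturbations

exact = (D + dD) * (Ic + dIc) - D * Ic   # true change of Ia
linear = Ic * dD + D * dIc               # small-signal prediction
print(exact - linear)                    # residual = dD*dIc, a 2nd-order term
assert abs(exact - linear - dD * dIc) < 1e-12
```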
Figure 3.6: AC three terminal invariant PWM switch cell valid for Voltage Mode CCM assumed
Figure 3.7: Conventional AC three terminal invariant PWM switch cell valid for Voltage Mode
CCM assumed. VCVS placed on the primary
a) Identify the nonlinear elements and define the active, passive, and common terminals
b) Average the current waveforms across the PWM switch (Table 4.1, A-Z 2nd):
I_c = V_OUT/R = 600 mA; I_a = D · I_c = 360 mA; I_p = (1 − D) · I_c = 240 mA
c) Average the voltage waveforms across the PWM switch and build it:
V_ap = V_IN = 50
V_cp = V_OUT = 30
V_ac = V_ap − V_cp = 20
The following simulation proofs of concept have been performed using TINA-TI, a SPICE-based
simulator by the DesignSoft company. Its free version is much more user-friendly than LTspice
(also free), and it is powerful if the "industrial" version is used. The free TINA-TI version can be
downloaded here: http://www.ti.com/tool/TINA-TI. The universal three-terminal PWM switch
in Voltage Mode CCM is shown below:
[Schematic of the universal PWM switch cell in TINA-TI: a modulator block (gain 1/Vramp = 333m) feeding the duty input D; the small-signal sources VCVS1 (Vap/D = 83.33) and CCCS1 (Ic = 600m); and the ideal transformer connecting the A, C and P terminals]
Figure 3.8: Universal AC (and DC?) PWM switch cell structure valid for CCM VM.
Note: there are four controlled sources and four terminals: active, passive, common, and duty
(external). Only the controlled-source values change; the "skeleton" is always the same. Further,
the modulator gain 1/Vramp has been included in the model (refer to p. 458, A-Z 2nd).
Checking the DC bias point, we can see a perfect match with the calculated values. Further, due to
the insertion of the modulator gain, the control voltage is scaled up to Vramp · Duty = 3 · 0.6 =
1.8. It is set externally.
[Simulation snapshot of the Buck case study DC bias point (Vin = 50 V, R1 = 50 Ω, C1 = 1.25 µF): Vout = 30 V, Ia = 360 mA, Ic = 600 mA, Ip = 240 mA, confirming V_ap = V_IN = 50, V_cp = V_OUT = 30, V_ac = V_ap − V_cp = 20]
Some considerations are mandatory:
First, the duty cycle is intrinsically included in the model. Its value is D = 0.6, and it is visible
in the controlled sources CCCS1, CCCS2 and VCVS2 in Fig. 3.8. It means the external
control variable (terminal "D" on the yellow PWM block, Fig. 3.9) shall be zero (or that
point shall be grounded) to properly evaluate the DC bias point. But doing this would be
no good for us, because when we are called to close the loop we need the "control
voltage" terminal to be set externally (it will be the output of the feedback OPAMP), and it
cannot be grounded.
The situation is complicated by the fact that the linearization process we performed to
get the three-terminal small-signal cell has the following disadvantage: the DC point is lost
during the linearization. So, in theory, the large-signal model shall be used to
evaluate the DC bias point, and the small-signal model shall be used to evaluate the AC
frequency response. Vorperian's AC model, in voltage mode, does not retain the DC
bias point.
Is it possible to use one circuit to do both? Using different simulation files and different
circuits to model the same power stage, just to perform DC and AC analyses, is annoying, and
errors can be produced by using different files for the same power stage (changes in
parameters, or conditions, etc.). The answer is: yes! And the circuit setup is shown in Fig.
3.9. Two external generators are needed: one works in DC only, the other one works in both AC
and DC. During the DC analysis they cancel each other, shutting down VCVS1 and CCCS1
(the transformer acts alone). During the AC analysis all DC values are "not visible" and the
complete AC small-signal model is fully restored.
The PWM switch's flexibility is checked against the control-to-output TF and the audiosusceptibility
TF reported in A-Z 2nd, pp. 463-464, for the Buck converter. Note that we call "Gc2o" the control-to-output
transfer function and "GL2o" the line-to-output transfer function.
[Left: Bode diagram, Gc2o from A-Z 2nd p. 463 versus the PWM switch model, gain and phase versus frequency. Right: Gauss chart, imaginary part versus real part of the poles]
Figure 3.10: Comparing the PWM switch Gc2o TF versus Maniktala's G(s), p. 463. The curves are
perfectly superimposed.
[Bode diagram: GL2o from A-Z 2nd p. 464 versus the PWM switch model. No Gauss chart is needed for the line-to-output TF, because its poles and zeros are identical to those of the control-to-output TF]
Figure 3.11: Comparing the PWM switch GL2o TF versus Maniktala's audiosusceptibility G(s), p. 464.
The curves are perfectly superimposed.
Note the presence of complex poles (crosses). No zeros (circles) are shown on the Gauss chart for
the buck topology: the Right Half Plane zero (RHP) is absent, and the ESR zero is absent. In
particular, no ESR zero is present simply because we didn't include the ESR. It will automatically be
placed on the Gauss chart if included in the simulation (see the Boost and Buck-Boost examples).
BOOST Example
Input parameters: fSW = 100 kHz; Vin = 10 V; Vout = 40 V; L = 100 µH; C = 470 µF; R = 20 Ω;
Vramp = 5 V; ESR_C = 100 mΩ; ESR_L = 70 mΩ.
D = 1 − Vin/Vout = 0.75; I_OUT = Vout/R = 2 A; Mod_gain = 1/Vramp = 0.2
a) Identify the nonlinear elements and define the active, passive, and common terminals. Pay
attention to the reference signs used for the currents going into or out of the PWM switch cell
used for the Buck converter! They need to be the same to profit from the fully invariant feature.
Refer to Fig. 3.3.
b) Average the current waveforms across the PWM switch (Table 4.1, A-Z 2nd):
I_a = −I_L · D = −I_OUT · D/(1 − D) = −6 A
I_c = −I_L = −I_OUT/(1 − D) = −8 A
I_p = −I_OUT = −2 A
c) Average the voltage waveforms across the PWM switch and build it (Fig. 3.7):
V_ap = −Vout = −40 V
V_cp = −Vin = −10 V
V_ac = V_ap − V_cp = −30 V
External Excitation = D · VRAMP = 3.75

BUCK-BOOST Example
Input parameters: fsw = 100 kHz; Vin = 35 V; Vout = 100 V; L = 260 µH; C = 5 µF; R = 150 Ω;
Vramp = 5 V; ESR_C = 85 mΩ; ESR_L = 85 mΩ.
D = Vout/(Vout + Vin) = 0.74; I_OUT = Vout/R = 666 mA; Mod_gain = 1/Vramp = 0.2
a) Same as for the Boost: identify the terminals and keep the Buck reference signs (Fig. 3.3).
b) Average the current waveforms across the PWM switch (Table 4.1, A-Z 2nd):
I_a = I_L · D = I_OUT · D/(1 − D) = 1.9 A
I_c = I_L = I_OUT/(1 − D) = 2.56 A
I_p = I_OUT = 666 mA
c) Average the voltage waveforms across the PWM switch and build it (Fig. 3.7):
V_ap = Vin + Vout (the Buck-Boost is inverting) = 135 V
V_cp = Vin = 35 V
V_ac = Vout (the Buck-Boost is inverting) = 100 V
External Excitation = D · VRAMP = 3.7
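The DC bias values of the two examples can be reproduced with a few lines of Python (ideal components assumed, losses neglected; the function names are ours):

```python
# DC operating points of the Boost and Buck-Boost examples; the signs follow
# the Buck reference convention of Fig. 3.3, so the Boost values come out negative.
def boost_bias(vin, vout, r):
    d = 1 - vin / vout          # CCM boost duty cycle
    i_out = vout / r
    i_l = i_out / (1 - d)       # average inductor current
    return d, -i_l * d, -i_l, -i_out      # D, Ia, Ic, Ip

def buckboost_bias(vin, vout, r):
    d = vout / (vout + vin)     # CCM buck-boost duty cycle
    i_out = vout / r
    i_l = i_out / (1 - d)
    return d, i_l * d, i_l, i_out         # D, Ia, Ic, Ip

print(boost_bias(10, 40, 20))       # D = 0.75, Ia = -6 A, Ic = -8 A, Ip = -2 A
print(buckboost_bias(35, 100, 150)) # D ~ 0.74, Ia ~ 1.9 A, Ic ~ 2.57 A, Ip ~ 0.67 A
```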
[SPICE schematics of the Boost and Buck-Boost linear models built around the PWM switch cell (U1 CCM VM), driven by the external excitation Duty_DC × Vramp (3.75 V and 3.7 V respectively). Simulated DC readings, Boost: Vout = 37.88 V, Ia = −5.68 A, Ic = −7.58 A, Ip = −1.89 A. Buck-Boost: Vout = −98.79 V, Ia = 1.87 A, Ic = 2.53 A, Ip = 658.58 mA]
Note 1: The DC bias point is not lost if the AC model is used. It has an "invariant" skeleton, but
different values (Fig. 3.8).
Note 2: The DC Vout is almost, but not exactly, equal to the ideal value, due to the parasitic ESR_L.
The ESR zero will also be visible.
[Gauss charts: the RHP zero, the ESR zero, and the complex poles are visible for both the Boost and the Buck-Boost]
Collecting Closed-Form Equations for VM CCM using the PWM Switch model

Control To Output:
Buck: G(s) = (V_IN/V_RAMP) · (1 + s/ω_ESR) / (s²/ω₀² + s/(ω₀·Q) + 1)
  with ω₀ = 1/√(L·C); ω₀·Q = R/L; ω_ESR = 1/(C·ESR)
Boost: G(s) = (V_IN/V_RAMP) · (1/(1−D)²) · (1 − s/ω_RHP) · (1 + s/ω_ESR) / (s²/ω₀² + s/(ω₀·Q) + 1)
  with Le = L/(1−D)²; ω₀ = 1/√(Le·C); ω₀·Q = R/Le; ω_ESR = 1/(C·ESR); ω_RHP = (R/L)·(1−D)²
Buck-Boost: G(s) same as for the Boost, with negative sign; ω₀, ω_ESR, Q, Le same as for the Boost;
  ω_RHP differs → (R/L)·(1−D)²/D

Line To Output:
Buck: G(s) = D · (1 + s/ω_ESR) / (s²/ω₀² + s/(ω₀·Q) + 1)
Boost: G(s) = (1/(1−D)) · (1 + s/ω_ESR) / (s²/ω₀² + s/(ω₀·Q) + 1)
Buck-Boost: G(s) = (−D/(1−D)) · (1 + s/ω_ESR) / (s²/ω₀² + s/(ω₀·Q) + 1)
  (ω₀, Q, ω_ESR same as above; with Le for the Boost and Buck-Boost)

Input Impedance:
Buck: Z_IN(s) = (R/D²) · (s²/ω₀² + s/(ω₀·Q) + 1) / (1 + s/ω_RC)
Boost: Z_IN(s) = R·(1−D)² · (s²/ω₀² + s/(ω₀·Q) + 1) / (1 + s/ω_RC)
Buck-Boost: Z_IN(s) = (R·(1−D)²/D²) · (s²/ω₀² + s/(ω₀·Q) + 1) / (1 + s/ω_RC)
  (ω₀, Q same as above; ω_RC = 1/(R·C))

Output Impedance:
Buck: Z_OUT(s) = s·L · (1 + s/ω_ESR) / (s²/ω₀² + s/(ω₀·Q) + 1)
Boost: Z_OUT(s) = s·Le · (1 + s/ω_ESR) / (s²/ω₀² + s/(ω₀·Q) + 1)
Buck-Boost: same as for the Boost
  (ω₀, Q, ω_ESR same as above; with Le where applicable)

Table 3.1: Collecting closed-form equations for the VM CCM Buck, Boost and Buck-Boost.
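As a quick sanity check, the Buck column of the table can be evaluated numerically for the case study used earlier (Vin = 50 V, Vramp = 3 V, L = 200 µH, C = 1.25 µF, R = 50 Ω, ESR = 100 mΩ); this is a sketch using our own variable names:

```python
import math

# Evaluating the Buck closed-form plant of Table 3.1 for the worked case study.
Vin, Vramp = 50.0, 3.0
L, C, R, ESR = 200e-6, 1.25e-6, 50.0, 0.1

w0 = 1.0 / math.sqrt(L * C)     # LC double-pole resonance (rad/s)
Q = R * math.sqrt(C / L)        # quality factor, so that w0*Q = R/L
w_esr = 1.0 / (C * ESR)         # ESR zero (rad/s)

def gain_db(f):
    """|G(j*2*pi*f)| in dB, G(s) = (Vin/Vramp)*(1 + s/w_esr)/(s^2/w0^2 + s/(w0*Q) + 1)."""
    s = 2j * math.pi * f
    g = (Vin / Vramp) * (1 + s / w_esr) / ((s / w0) ** 2 + s / (w0 * Q) + 1)
    return 20 * math.log10(abs(g))

print(w0 / (2 * math.pi))    # ~10.07 kHz double pole
print(w_esr / (2 * math.pi)) # ~1.27 MHz ESR zero
print(gain_db(10))           # low-frequency gain ~24.4 dB = 20*log10(50/3)
```

The ~10 kHz double pole and ~1.27 MHz ESR zero match the compensator placement used in the next section.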
The last thing to do before closing Part 1 of this chapter is to get a feel for what happens
if we compare the Linear model with the Cycle-by-Cycle model for a case study. Intuitively, the first
advantage is analysis speed: we get estimated results in seconds, compared with the cycle-by-cycle
"real" model. Second, the linear model should remove the high-frequency AC component. We can
verify these statements by closing the loop for the Buck case study as an example.
Referring to A-Z 2nd Figure 12.16, p. 480, and Figure 12.19, p. 489, we know the Type 3
compensator gives the highest phase boost and introduces 2 poles and 2 zeros (plus one pole at
the "origin"). As suggested, we place the 2 zeros at the LC pole frequency, 1 pole at the ESR zero,
and 1 pole at 10 · f_cross. Setting f_cross equal to 15 kHz and assuming ESR_C = 100 mΩ for the
Buck case study, we get:
f_P2 → 1/(2π·ESR·C) = 1.27 MHz; f_P1 → 10·f_cross = 150 kHz; f_Z1 = f_Z2 → 1/(2π·√(L·C)) = 10 kHz
By locking R1 = 10 kΩ we get: C1 = 45 nF, R2 = 356 Ω, C2 = 352 pF, R3 = 666 Ω, C3 = 1.6 nF.
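The component values can be cross-checked against the placement rules. Note the component-to-frequency mapping below is our assumption: it uses the common Type 3 topology, with R2-C1 (and C2 in parallel) as the feedback branch and R3-C3 in series with the upper divider resistor R1:

```python
import math

# Pole/zero frequencies of the Type 3 compensator from the listed RC values.
R1, C1 = 10e3, 45e-9
R2, C2 = 356.0, 352e-12
R3, C3 = 666.0, 1.6e-9

f_z1 = 1 / (2 * math.pi * R2 * C1)          # ~10 kHz, at the LC double pole
f_z2 = 1 / (2 * math.pi * (R1 + R3) * C3)   # ~9.3 kHz, also near the LC pole
f_p1 = 1 / (2 * math.pi * R3 * C3)          # ~149 kHz, near 10*f_cross
f_p2 = 1 / (2 * math.pi * R2 * C2)          # ~1.27 MHz, at the ESR zero

print(f_z1, f_z2, f_p1, f_p2)
```

Under that assumption, all four frequencies land where the placement rules intended.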
[Bode plot: plant Gc2o and compensated loop Gc2o × Type3Comp, gain and phase versus frequency. Phase margin: 36.41° at a crossover frequency of 15.17 kHz]
If we recall the Bode criteria, we can say the system is stable. At the same time, we know that a 36°
phase margin (for this particular case) is not so good. Practically speaking, a margin
lower than 45° (worst case) is not acceptable, because the system "rings" too much when a step
load is applied and takes a long time to reach its steady state.
Further, the numbers we have used for this example were selected somewhat arbitrarily, so for the
sake of simplicity we will neglect the optimization aspect: we are interested in cross-checking
linear versus cycle-by-cycle behavior. At the same time, we are well aware that an optimization step
is required from a practical point of view.
[Top: closed-loop linear model of the Buck case study (PWM switch cell plus the Type 3 compensator around OP1, with R1 = 10k, R2 = 356, R3 = 666, C1 = 44.7n, C2 = 352p, C3 = 1.6n). Toggle both feedback switches up to get the linear transient analysis (loop chain not broken); toggle both switches down to get the AC Bode diagrams for the plant and for plant + compensation (loop chain broken). Bottom left: cycle-by-cycle model (MOS + diode + modulator). Bottom right: 50% step load in both directions, linear versus cycle-by-cycle output voltage]
Figure 3.13: Top Figure: closed loop Linear. Bottom Figure: closed loop Cycle by Cycle. Bottom right: comparing Linear step load VS Cycle by
Cycle step load
Actually, V. Vorperian's method can be used to get a linear model even for somewhat more complex
topologies such as the Cuk, Zeta, Sepic, isolated converters, etc. A question now arises
spontaneously: what about isolated topologies such as the Forward, Flyback, Push-Pull, etc.? Is it
possible to use the PWM switch approach? If yes, where is it convenient to place the PWM switch,
on the primary or on the secondary? Do the main equations differ drastically?
First, it is possible to use Vorperian's approach even for isolated (fixed-frequency) topologies.
Second, it is convenient to place the PWM switch on the primary side, simply because it is much
easier to simulate a linear model for multi-output converters. If there is only one output, then we
can choose arbitrarily where to place the PWM switch: primary or secondary.
Flyback and Forward linear models can be built referring to Figure 3.2, A-Z 2nd, p. 130. Simply by
mapping all the secondary-side components to the primary side, we can cut out the main power
transformer, getting an equivalent buck (Forward case) or an equivalent buck-boost (Flyback
case). In this way we can get a quick examination of the four main transfer functions using
the same approach used for the three basic topologies. In particular, given specific input
parameters, this time including the primary turns Np and the secondary turns Ns, with n = Np/Ns,
all we need to do is map the isolated converter into the equivalent one seen from the primary side.
The following equations are valid for both cases, Flyback and Forward; only the inductor value
changes between the two topologies.
V_OR = V_O · n;  C_EQ = C/n²;  ESR_EQ = ESR · n²;  R_EQ = R · n²
L_EQ = L · n² (Forward, output choke reflected), while L_EQ = L_M (Flyback, the magnetizing inductance referred to the primary)
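A minimal sketch of the reflection rules; the helper function and the numeric values are illustrative assumptions of ours, not from the text:

```python
# Reflecting secondary-side components to the primary, per the rules above.
def reflect_to_primary(n, v_out, c, esr, r_load, l_sec=None, l_mag=None):
    """n = Np/Ns. Pass l_sec for a Forward (output choke reflected by n^2),
    or l_mag for a Flyback (primary-side magnetizing inductance, used as-is)."""
    return {
        "V_OR": v_out * n,
        "C_eq": c / n ** 2,
        "ESR_eq": esr * n ** 2,
        "R_eq": r_load * n ** 2,
        "L_eq": l_sec * n ** 2 if l_sec is not None else l_mag,
    }

# Hypothetical 5 V / 2.5 ohm Forward output with n = 10 and a 10 uH choke:
print(reflect_to_primary(10, 5.0, 470e-6, 0.02, 2.5, l_sec=10e-6))
# V_OR = 50 V, C_eq = 4.7 uF, ESR_eq = 2 ohm, R_eq = 250 ohm, L_eq = 1 mH
```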
We can simply use the same Buck (Forward case) or Buck-Boost (Flyback case) model, assuming the
equivalent circuit has been correctly mapped.
Forward (Buck derived), n = NP/NS:
Control To Output: G(s) = (1/n) · G_BUCK(s)
Line To Output: G(s) = (1/n) · G_BUCK(s)
Output Impedance: Z_OUT(s) = (1/n²) · Z_OUT,BUCK(s)
ω₀, ω_ESR, Q don't change; same as Table 3.1

Flyback (Buck-Boost derived), n = NP/NS:
Control To Output: G(s) = (1/n) · G_BUCK-BOOST(s), with positive sign
Line To Output: G(s) = (1/n) · G_BUCK-BOOST(s), with positive sign
Output Impedance: Z_OUT(s) = (1/n²) · Z_OUT,BUCK-BOOST(s)
ω₀, ω_ESR, Q don't change; same as Table 3.1
VOLTAGE MODE CIRCUIT COLLECTION USING THE PWM SWITCH, CCM ASSUMED
[Schematic collection: Buck, Boost, Buck-Boost, Tapped Boost, SEPIC, CUK, FORWARD (Buck + transformer), and FLYBACK (Buck-Boost + transformer), each built around the CCM VM PWM switch cell U1]
Figure 3.14: Circuit Collection using PWM Switch. The structure is applicable to both cases: Voltage mode and Current Mode (CCM assumed)
3.4 Peak Current Mode PWM switch
Don't be afraid of linearizing a power stage that uses Peak Current Mode Control (PCMC).
Despite it appearing to be a difficult procedure, extracting a PCMC linear model is easier than
you think, if you know how to do it. V. Vorperian's approach is simple and elegant, and any power
electronics book should include a tribute to it.
There are tons of documents in the literature on this topic, most of them quite confusing
or hard to understand. In this section it will be shown how to extract the PCMC linear model
using the same three steps we used in voltage mode. For the sake of simplicity, only the CCM case
will be discussed in this book.
Assuming all the PWM switch features verified in voltage mode remain valid for the PCMC linear
model as well, just one big difference deserves to be taken into account with respect to voltage
mode control: the possibility of incurring subharmonic oscillations (refer to A-Z 2nd, p. 94). The
PCMC linear model will therefore be much more useful if it automatically adapts to the presence of
a compensation ramp foreseen by design. We recall that ramp compensation is necessary when the
power stage operates in CCM with a duty cycle approaching 0.45-0.5 (or greater).
As we did for voltage mode control, the Buck converter will be discussed as the reference
example to extract the PCMC linear model. Then the PWM switch "fully invariant" property will
be used to manage the Boost, the Buck-Boost, and the derived topologies (Flyback, Forward, Sepic,
etc.). The effort in extracting the PCMC PWM switch in CCM depends on how well the reader
understands the following picture: how the peak current mode modulator works.
It includes two waveforms: the inductor current we get with (solid) or without (dotted) ramp
compensation. If a compensation current ramp with an amplitude VPP/RMAP is added to the
control signal (to be pedantic, "subtracted" from it), then the two inductor current peaks differ.
Figure 3.15: Recalling A-Z 2nd p. 497-498. The current mode modulator “decreases” the sensed
inductor peak current if the external ramp is used
From the picture above we can write two main equations. Then we can apply the same steps we
did for voltage mode.
1) I_Lpk = V_C/R_i − x = V_C/R_i − (|S_e|/R_i) · D · T_SW
2) I_c = I_Lpk − ΔI_L/2 = V_C/R_i − (|S_e|/R_i) · D · T_SW − (1/2) · |S_2| · (1 − D) · T_SW
Here R_i is the current-sense (map) resistance (R_MAP in the figure), S_e is the external
compensation ramp slope, and S_2 is the inductor current downslope.
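To make equations 1) and 2) concrete, here is a quick numerical evaluation; the operating point (a hypothetical 12 V to 5 V buck at 100 kHz with L = 100 µH, R_i = 0.1 Ω, S_e = 2.5 kV/s) is entirely our own choice, not from the text:

```python
# Plugging illustrative numbers into equations 1) and 2).
Vin, Vout, L = 12.0, 5.0, 100e-6
T_sw = 1 / 100e3
D = Vout / Vin              # CCM buck duty cycle
Vc, Ri = 1.0, 0.1           # control voltage and current-sense (map) resistance
Se = 2.5e3                  # external compensation ramp slope (V/s), our choice
S2 = Vout / L               # inductor current downslope (A/s) for a buck

I_pk = Vc / Ri - (Se / Ri) * D * T_sw      # equation 1): peak inductor current
I_c = I_pk - 0.5 * S2 * (1 - D) * T_sw     # equation 2): average c-terminal current
print(I_pk, I_c)   # ~9.896 A and 9.75 A
```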
a) Identify the nonlinear elements and define the active, passive, and common terminals.
Same as voltage mode.
c) Average the voltage waveforms across the PWM Switch and build it:
V_ap = V_IN
V_cp = V_OUT = D · V_ap
V_ac = V_ap − V_cp
To give the reader a full understanding of how to extract the small-signal linear model, the
same method used for the voltage mode case will be applied here as well. We just need to focus on
the I_c and I_a equations, applying the partial-derivative procedure (lowercase letters denote the
small-signal perturbations).
DC equations:
I_c = V_C/R_i − (|S_e|/R_i) · (V_cp/V_ap) · T_SW − (V_cp · T_SW/(2·L)) · (1 − V_cp/V_ap)
I_a = I_c · (V_cp/V_ap)
AC equations:
i_c = k_0 · v_c + g_f · v_ap + g_0 · v_cp
i_a = k_a · i_c + g_a · v_ap + g_c · v_cp
Second, the quantity g_0 (Table 3) needs more attention than all the other terms. It quantifies
the third component of the AC current i_c, and relates a part of the "common" terminal
current I_c to its "common to passive" voltage V_cp. At least intuitively, it can be modeled
as a resistor R0 = 1/g_0 placed between the common and passive terminals (Figure
3.16). On the other hand, if a resistor model is used, a "sign mismatch" between the
term g_0 · v_cp and its electrical representation as the resistor R0 appears. This mismatch
is due to the fact that, for a buck converter, the load current goes out of the "common"
terminal toward the load resistor we called R. At the same time, electric circuit theory makes it
mandatory to guarantee the passive sign convention on the equivalent
resistor R0. The passive sign convention on R0 imposes the current to flow out of the "c"
node towards the "p" terminal (Figure 3.16). A simple way to fix this sign mismatch is to
take g_0 with its sign reversed when the partial-derivative approach is used to get the AC
model. This consideration is valid for g_0 only, and not for g_f, because for g_f the sign is
preserved correctly.
Figure 3.16: Understanding the minus sign for the third Ic component
Same story for 𝐼 :
|𝑉 | |𝑆| ∙ 𝑇 𝑉 |𝑉 | ∙ 𝑇 𝑉
𝐼 = − ∙ − ∙ 1− → 3 𝑣𝑎𝑟𝑖𝑎𝑏𝑙𝑒𝑠 𝑉 , 𝑉 , 𝑉
𝑅 𝑅 𝑉 2∙𝐿 𝑉
𝜕 1
→ (𝐼 ) ∙ 𝑉 = ± ∙ 𝑉 = ±𝑘 ∙ 𝑉
𝜕𝑉 𝑅
Attention here! Why the "plus-minus" sign for k0? Note k0 depends on the direction of Ic. To use the "invariant" feature we are forced to lock the same current directions for all topologies, as we did during the Boost and Buck-Boost examples in voltage mode. It means that if the Buck topology is discussed as the reference example for peak current mode control, then the "common" current comes out of the PWM switch (Fig. 3.3) and k0 has a positive sign. Vice versa for the Boost, in which k0 has a negative sign (Ic comes into the PWM switch – Fig. 3.3!)
→ −∂Ic/∂Vcp · v̂cp = −[ −(|S|·Tsw)/(Rmap·|Vap|) − Tsw/(2L) + (2·Vcp·Tsw)/(2L·Vap) ]·v̂cp = (Tsw/L)·[ (|S|·L)/(Rmap·|Vap|) + 1/2 − D ]·v̂cp = g0·v̂cp
Collecting all the current components in a compact form we get (using Vorperian notation):

ki = D;   1/Ri = −Ia/Vap;   gr = Ic/Vap

k0 = ±1/Rmap;   g0 = 1/Ro = (Tsw/L)·[ (|S|·L)/(Rmap·|Vap|) + 1/2 − D ];   gf = (S·Tsw·D)/(Rmap·|Vap|) − (Tsw·D²)/(2L)
ki relates the active terminal current Ia to the common terminal current Ic (îa = ki·îc). It can be modeled as a controlled current source.
Ri relates the active terminal current Ia to its active-to-passive voltage Vap. It is modeled as a resistor placed between the active and passive terminals.
gr relates the active terminal current Ia to the common-to-passive voltage Vcp. It can be modeled as a voltage-controlled current source.
k0 relates the common terminal current Ic to the control voltage Vc. It can be modeled as a voltage-controlled current source.
g0 relates the common terminal current Ic to its common-to-passive voltage Vcp. It is modeled as a resistor R0 = 1/g0 placed between the common and passive terminals.
gf relates the common terminal current Ic to the active-to-passive voltage Vap. It can be modeled as a voltage-controlled current source.
Until now we have said nothing about the external compensation slope S. Referring to A-Z 2nd p. 498, we simply recall that the S value cannot be chosen at random. In order to avoid subharmonic instability it is mandatory to have S > 50%·S2. Once the converter design "skeleton" has been defined, S2 = VOFF/L is calculated, and S can then be selected accurately. A simple way to include the "subharmonic issues" in the model is the CSUBH capacitor shown in Fig. 3.17. It models the converter dynamics in the vicinity of half the switching frequency, and its value is calculated to give a peak resonance at fsw/2 with the main power-stage inductor:
fsw/2 = 1/(2π·√(L·CSUBH))  →  CSUBH = 1/(L·(fsw·π)²)
Figure 3.17: Universal PCMC small signal model (Buck CCM assumed and subharmonic effect
taken into account)
Figure 3.18: Universal Peak Current Mode small signal model in TINA (or Spice derived) - CCM
assumed
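As a quick numerical sanity check of the coefficient formulas above, the short script below (a sketch, using the Buck case-study values introduced in the next section: fs = 100 kHz, L = 100 µH, Rmap = 0.25, S = 2.5 kV/s, |Vap| = 10 V, D = 0.5) reproduces the Ro value that appears in the TINA model of Figure 3.18:

```python
# Sketch: evaluating the k0, g0/Ro and gf formulas for the Buck case study.
fs, L, Rmap, S, Vap, D = 100e3, 100e-6, 0.25, 2.5e3, 10.0, 0.5
Tsw = 1.0 / fs

k0 = 1.0 / Rmap                                               # +1/Rmap for the Buck
g0 = (Tsw / L) * (abs(S) * L / (Rmap * abs(Vap)) + 0.5 - D)   # common-to-passive conductance
Ro = 1.0 / g0                                                 # resistor in the model
gf = S * Tsw * D / (Rmap * abs(Vap)) - Tsw * D**2 / (2 * L)   # line-to-common transconductance

print(k0, Ro, gf)   # Ro = 100 ohm, matching the TINA model value
```

Note that with zero compensation slope the (1/2 − D) term cancels at D = 0.5, so g0 collapses to zero; this is one way to see why the external slope S matters there.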
3.5 PWM switch at work: Peak current mode CCM assumed
Input parameters:
fs[Hz]=100e3; Vin[V]=10; Vout[V]=5; L[H]=100e-6; C[F]=100e-6; ESR_C[Ω]=100e-3;
Rmap[Ω]=250e-3; R[Ω]=1; kext=0.05;
Output parameters:
D = Vout/Vin = 0.5
Von = Vin − Vout = 5;  Voff = −Vout = −5
|S1| = |Von/L|·Rmap = 12.5k
|S2| = |Voff/L|·Rmap = 12.5k
S = kext·S2 = 2.5k
CSUBH = 1/(L·(fsw·π)²) = 101.3 nF
Vc = Rmap·(IDC + IAC) = Rmap·[ Vout/R + (1/2)·(|Voff|/L)·D·Tsw ] = 1.28
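The output parameters above can be reproduced with a few lines (a sketch; S is entered directly as the 2.5 k value used in the text rather than derived from kext):

```python
import math

# Sketch: Buck case-study output parameters (section 3.5 values).
fs, Vin, Vout, L, R, Rmap = 100e3, 10.0, 5.0, 100e-6, 1.0, 0.25
Tsw = 1.0 / fs
D    = Vout / Vin                      # 0.5
Von  = Vin - Vout                      # 5 V
Voff = -Vout                           # -5 V
S1 = abs(Von / L) * Rmap               # 12.5 kV/s
S2 = abs(Voff / L) * Rmap              # 12.5 kV/s
S  = 2.5e3                             # compensation slope used in the text
Csubh = 1 / (L * (fs * math.pi)**2)    # ~101.3 nF
Vc = Rmap * (Vout / R + 0.5 * abs(Voff) / L * D * Tsw)   # ~1.28 V
print(D, S1, Csubh, Vc)
```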
a) Identify non-linear elements and define the active, passive, and common terminals
Same as for the voltage mode case
c) Average the voltage waveforms across the PWM switch and build it
Vac = Vin − Vout = 5 V
Vap = Vin = 10 V
Vcp = Vout = 5 V
[TINA AC model of the Buck case study: PWM-switch cell with Ri = −3.96, Ro = 100, Cs = 101.3212 n; bias point: Vout = 5 V, Ia = 2.5 A, Ic = 5 A, Ip = 2.5 A, Vin = 10 V]
The control-to-output TF equation reported in A-Z 2nd p. 503 (simple case) doesn't take into account the "subharmonic capacitor" inserted in the model. The control-to-output TF can then be refined as follows for the Buck converter:

Gc2o(s) = G0 · (1 + s/wz_esr)/(1 + s/wp) · 1/[ (s/wsubH)² + s/(wsubH·Q) + 1 ]
(A) → [ 1/R + (1/(L·fsw))·(m·(1 − D) − 0.5) ]⁻¹
(B) → Rmap
G0 = A/B;   wp = 1/(A·C);   wz_esr = 1/(ESR_C·C)
m = 1 + S/S2;   wsubH = 1/√(L·CSUBH);   Q = 1/(π·[m·(1 − D) − 0.5])
The closed-form equation for the line-to-output TF is quite similar to the control-to-output one: only the gain term changes.
G(s) = h · (1 + s/wz_esr)/(1 + s/wp) · 1/[ (s/wsubH)² + s/(wsubH·Q) + 1 ]
h = gf·(Ro||R), with gf and Ro from the AC model;   wp, m, wsubH and Q same as for Gc2o(s)
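As a numerical sketch of these definitions for the Buck case study (S = 2.5 k, S2 = 12.5 k; the A/B notation mirrors the control-to-output parameters above):

```python
import math

# Sketch: Gc2o parameters for the Buck case study.
fs, L, C, R, Rmap, D = 100e3, 100e-6, 100e-6, 1.0, 0.25, 0.5
S, S2 = 2.5e3, 12.5e3
m = 1 + S / S2                                    # 1.2
Q = 1 / (math.pi * (m * (1 - D) - 0.5))           # ~3.18
A = 1 / (1 / R + (1 / (L * fs)) * (m * (1 - D) - 0.5))
G0 = A / Rmap                                     # ~3.96 (about 12 dB)
wp = 1 / (A * C)                                  # ~10.1 krad/s
print(m, Q, G0, wp / (2 * math.pi))               # pole at ~1.6 kHz
```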
Figure 3.21: Gc2o optimized and Gauss chart. Note the complex poles related to the subharmonic capacitor insertion.
The last step before closing the Buck case study is to quantify how much the linear results differ from the cycle-by-cycle results. If the approach is correct we should get quite similar shapes in both the cycle-by-cycle and the linear waveforms. A comparison using peak current mode control for the Buck case study is shown below:
[TINA schematics for the comparison: the linear PWM-switch model (U1 CCM CM, L = 100 µ, bias: Vout = 5.000002 V, Ia ≈ 2.5 A, Ic ≈ 5 A, Vc = 1.281313 V) and the full cycle-by-cycle Peak Current Mode controller (U3 PWM CM with flip-flop, protection and ramp-compensation stages, and the compensation network R1 = 10 k, R2 = 356, R3 = 666, C1 = 44.7 n, C2 = 352 p, C3 = 1.6 n)]
Figure 3.22: Cycle by cycle model for Buck converter case study
Figure 3.23: Left: cycle-by-cycle model vs. linear model (Vout = 5 V); the curves are perfectly superimposed. Right: control voltage versus "mapped" inductor current (note the ramp-compensation presence slightly modifies the control voltage shape).
Note also: the subharmonic capacitor presence is crucial. It significantly alters the input impedance magnitude at fsw and fsw/2 with respect to the one obtained neglecting Cs. The input filter design is then affected. We strongly recommend paying the right attention to the input impedance (with Cs placed) during the input filter design step.
BOOST EXAMPLE

Input parameters:
fs[Hz]=100e3; Vin[V]=10; Vout[V]=40; L[H]=100e-6; C[F]=470e-6; ESR_C[Ω]=100e-3; Rmap[Ω]=250e-3; R[Ω]=20; kext=100e-3;

Output parameters:
D = 1 − Vin/Vout = 0.75
Von = Vin = 10;  Voff = Vin − Vout = −30
|S1| = |Von/L|·Rmap = 25k
|S2| = |Voff/L|·Rmap = 75k
S = kext·S2 = 7.5k
CSUBH = 1/(L·(fsw·π)²) = 101.3n
Vc = Rmap·(IDC + IAC) = Rmap·[ Vout/(R·(1 − D)) + (1/2)·(Von/L)·D·Tsw ] = 2.1

BUCK-BOOST EXAMPLE

Input parameters:
fs[Hz]=100e3; Vin[V]=50; Vout[V]=30; L[H]=75e-6; C[F]=10e-6; ESR_C[Ω]=100e-3; Rmap[Ω]=1; R[Ω]=10; kext=0;

Output parameters:
D = Vout/(Vin + Vout) = 375m
Von = Vin = 50;  Voff = −Vout = −30
|S1| = |Von/L|·Rmap = 666.66k
|S2| = |Voff/L|·Rmap = 400k
S = kext·S2 = 0
CSUBH = 1/(L·(fsw·π)²) = 135.1n
Vc = Rmap·(IDC + IAC) = Rmap·[ Vout/(R·(1 − D)) + (1/2)·(Von/L)·D·Tsw ] = 6.05

a) Identify non-linear elements and define the active, passive, and common terminals (both topologies). Pay attention to the reference signs used for the currents coming in or out of the PWM switch cell used for the Buck converter! They need to be the same to profit from the fully invariant feature. Refer to Fig. 3.3.
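The same output-parameter arithmetic, for both topologies at once, can be sketched with a small helper (the function name and argument list are assumptions, not the book's code):

```python
import math

def pcmc_params(fs, Vout, L, R, Rmap, kext, Von, Voff, D):
    # Returns (S1, S2, S, Csubh, Vc) as defined in the text.
    Tsw = 1.0 / fs
    S1 = abs(Von / L) * Rmap
    S2 = abs(Voff / L) * Rmap
    S  = kext * S2
    Csubh = 1 / (L * (fs * math.pi)**2)
    Vc = Rmap * (Vout / (R * (1 - D)) + 0.5 * (Von / L) * D * Tsw)
    return S1, S2, S, Csubh, Vc

boost  = pcmc_params(100e3, 40, 100e-6, 20, 0.25, 0.1, Von=10, Voff=-30, D=0.75)
bboost = pcmc_params(100e3, 30, 75e-6, 10, 1.0, 0.0, Von=50, Voff=-30, D=0.375)
print(boost)    # S1 = 25 k, S2 = 75 k, S = 7.5 k, Csubh ~101.3 n, Vc ~2.1
print(bboost)   # S1 ~666.7 k, S2 = 400 k, S = 0, Csubh ~135.1 n, Vc = 6.05
```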
b) Average the current waveforms across the PWM switch

Boost:
Ic = −[ Vc/Rmap − (S/Rmap)·D·Tsw − (1/2)·(S2/Rmap)·(1 − D)·Tsw ] = −8 A
Ia = Ic·D = −6 A
Ip = Ic·(1 − D) = −2 A

Buck-Boost:
Ic = +[ Vc/Rmap − (S/Rmap)·D·Tsw − (1/2)·(S2/Rmap)·(1 − D)·Tsw ] = 4.8 A
Ia = Ic·D = 1.8 A
Ip = −Ic·(1 − D) = −3 A

c) Average the voltage waveforms across the PWM switch and build it

Boost:  Vac = −Vin = −10 V;  Vap = −Vout = −40 V;  Vcp = Vin − Vout = −30 V
Buck-Boost:  Vac = Vin = 50 V;  Vap = Vin + Vout = 80 V;  Vcp = Vout = 30 V

AC parameters, Boost:
ki = D = 750m
−Ia/Vap = −151m → Ri = −6.62
gr = Ic/Vap = 201.4m
k0 = ∂Ic/∂Vc = −1/Rmap = −4
g0 = (Tsw/L)·[ (|S|·L)/(Rmap·|Vap|) + 1/2 − D ] = −17.5m → Ro = 1/g0 = −57.14
gf = (S·Tsw·D)/(Rmap·|Vap|) − (Tsw·D²)/(2L) = −22.5m

AC parameters, Buck-Boost:
ki = D = 375m
−Ia/Vap = −22.5m → Ri = −44.444
gr = Ic/Vap = 60m
k0 = ∂Ic/∂Vc = +1/Rmap = 1
g0 = (Tsw/L)·[ (|S|·L)/(Rmap·|Vap|) + 1/2 − D ] = 16.66m → Ro = 1/g0 = 60
gf = (S·Tsw·D)/(Rmap·|Vap|) − (Tsw·D²)/(2L) = 9.375m
[TINA AC three-terminal cells and bias points for the two case studies. Boost: Ri = −6.62, Ro = −57, Cs = 101.3212 n, Vc = 2.1; bias: Vout = 40.07 V, Ia = −6.02 A, Ic = −8.03 A, Ip = −2 A. Buck-Boost: Ri = −44.4444, Ro = 60, Cs = 135 n, Vc = 6.05; bias: Vout = −30 V, Ic = 4.8 A. The DC bias point is correctly stored with the AC model.]
[Gc2o pole-zero (Gauss) charts and Bode plots for the two case studies. Boost: the ESR zero and the RHP zero are visible. Buck-Boost (S = 0): highly unstable, with subharmonic poles having positive real part; the analysis shows the effect of the ramp-compensation amplitude.]
[Table 3.4 summary: for each basic topology (Buck, Boost, Buck-Boost inverting) the table lists the control-to-output TF Gc2o(s) and the line-to-output TF G(s), both of the form gain·(1 + s/wz_esr)/[(1 + s/wp)·((s/wsubH)² + s/(wsubH·Q) + 1)], with a right-half-plane zero added for the Boost and Buck-Boost entries; m = 1 + S/S2, wsubH = 1/√(L·CSUBH) and Q = 1/(π·[m·(1 − D) − 0.5]) are the same as for the Buck, and gf, gr, ki, Ri, Ro come from the AC model.]
Table 3.4: Collecting closed form equations for basic topologies (Peak Current Mode & CCM assumed)
We did our job even in terms of Zin and Zout, and we confirm that closed-form equations exist for peak current mode as well. The drawback is that the final form of both is complex (with the Cs capacitor placed), so we invite the reader not to waste energy on this: once the linear modeling concept has been grasped, use it to get the transfer function you need in one mouse click.
Flyback and Forward linear models can be built using the same approach we used in voltage mode, referring to Figure 3.2, A-Z 2nd p. 130. By simply mapping all the secondary-side components to the primary side we can cut out the main power transformer, getting an equivalent buck (forward case) or an equivalent buck-boost (flyback case). In particular, given specific input parameters, this time including primary turns Np and secondary turns Ns with n = Np/Ns, all we need to do is map the isolated converter into the equivalent one seen from the primary side. The following equations are valid for both cases, Flyback and Forward; only the inductor and mapping resistor values change between the two topologies.

Vout_eq = Vout·n;  C_eq = C/n²;  ESR_eq = ESR_C·n²;  R_eq = R·n²
L_eq = L·n² (Forward output inductor) while L_eq = Lmag (Flyback magnetizing inductance)
Control-to-output:  Gc2o_FORWARD(s) = (1/n)·Gc2o_BUCK(s);  Gc2o_FLYBACK(s) = (1/n)·Gc2o_BUCK-BOOST(s) with positive sign
Line-to-output:  GL2o_FORWARD(s) = (1/n)·GL2o_BUCK(s);  GL2o_FLYBACK(s) = (1/n)·GL2o_BUCK-BOOST(s) with positive sign
Output impedance:  Zout_FORWARD(s) = (1/n²)·Zout_BUCK(s);  Zout_FLYBACK(s) = (1/n²)·Zout_BUCK-BOOST(s)
Table 3.5: Collecting closed form equations for Flyback and Forward in Peak Current Mode
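A minimal sketch of this secondary-to-primary mapping (the map_to_primary helper and its argument names are hypothetical; the inductor rule follows the text above, i.e. the output inductor is mapped for the Forward while the Flyback keeps its primary-side magnetizing inductance):

```python
# Sketch: mapping secondary-side components to the primary (n = Np/Ns).
def map_to_primary(n, Vout, C, ESR, R, L, topology):
    return {
        "Vout": Vout * n,
        "C":    C / n**2,
        "ESR":  ESR * n**2,
        "R":    R * n**2,
        "L":    L * n**2 if topology == "forward" else L,  # flyback: Lmag already on primary
    }

fwd = map_to_primary(n=5, Vout=5.0, C=100e-6, ESR=10e-3, R=1.0, L=10e-6, topology="forward")
print(fwd)  # Vout = 25, C = 4e-6, ESR = 0.25, R = 25, L = 250e-6
```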
With this table we almost close this chapter, inviting the reader to spend the needed time on this topic. It is not complex, and once you get it you will have a greater perception of what you are doing.
The input filter design has been widely discussed in the previous chapters of this book. These few pages present an alternative strategy, showing how to properly design an input filter by using the PWM switch. Note the input filter is mainly composed of two sections: differential mode (DM) and common mode (CM). The CM filter fights the noise due to parasitic elements and parasitic paths; for its design we address the reader to Chapter 16. On the other hand, the DM filter design can easily be performed using the linear model. Two things need to be kept in mind during the filter design:
1. It needs to provide the required attenuation at the frequency under study. Assume for example the same Buck converter in Peak Current Mode (Figure 3.20) needs to be compliant with CISPR 22 class B limits. The conducted-emission frequency range has a lower limit at 150 kHz. All we need to do is estimate the attenuation required to push below the limit the "first" harmonic that falls inside the CISPR frequency spectrum.
2. The filter, as an external element, shall not alter the power stage functionality. To fix this issue, prof. Middlebrook suggests designing a filter that imposes an output impedance Zout_filter low enough with respect to the converter input impedance Zin_DCDC around the critical area. The critical area is identified by the frequency range in which Zout_filter is at its maximum and/or Zin_DCDC is at its minimum. It will be clarified during the example study.
Let's focus on point 1. We know the peak current into the switch is almost equal to the load current (refer to Fig. 3.24). It is 5 A in our example. Its Fast Fourier Transform can easily be evaluated by simulation.
[Plot: switch current over the last switching cycles and its FFT; the "first" harmonic inside the CISPR spectrum is 0.35 A at 200 kHz]
The "first" harmonic that falls inside the CISPR22 conducted-emission spectrum [150 kHz – 30 MHz] is at 2·fsw, or 200 kHz, and has a peak of 0.35 A. The CISPR22 limit at 200 kHz is 46 dBµV, and its decimal (natural) value is 10^(46/20−3) = 0.2 mV. This is the voltage limit, which includes both effects, common mode and differential mode. At the same time we know the DM load impedance typically imposed by the LISN is around 100 Ω, although the DM voltage is measured on half of the full resistance (50 Ω); while the CM load impedance typically imposed by the LISN is around 50 Ω, although the CM voltage is measured on half of the full resistance (25 Ω). It means the line and neutral voltages can be written as:
VL = 25·Icm + 50·Idm = 0.2 mV (max admitted @ 200 kHz by CISPR 22 class B)
VN = 25·Icm − 50·Idm = 0.2 mV (max admitted @ 200 kHz by CISPR 22 class B)
If we assume the noise splits equally between both components (CM and DM), then the maximum voltage level admitted for each noise component, DM or CM, is 0.1 mV. Focusing on DM: knowing the admitted voltage and the equivalent resistance seen by the LISN, we know the maximum current limit, which is 0.1 mV/50 = 2 µA. The attenuation in current is then:
Att = Iadmitted/Imeasured@200kHz = (2·10⁻⁶)/0.35 = 5.71·10⁻⁶ = −105 dB
The filter resonant frequency can be evaluated as:

fr = fharm·10^(Att(dB)/40) = 200 kHz·10^(−105/40) = 478 Hz = 1/(2π·√(Lf·Cf))

fr = 1/(2π·√(Lf·Cf)) → Cf = 1/(Lf·(2π·fr)²) = 83.2 µF;  Lf = 1/(Cf·(2π·fr)²) = 1.33 mH
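The attenuation and filter-corner arithmetic above can be checked in a few lines (a sketch; the 0.1 mV on 50 Ω DM budget is the one derived in the text, and the −40 dB/decade second-order filter roll-off is what turns Att/40 into the frequency ratio):

```python
import math

# Sketch: DM attenuation requirement and LC filter corner.
I_harm = 0.35                    # first harmonic inside the CISPR band [A]
I_max  = 0.1e-3 / 50             # 2 uA maximum admitted DM current
att    = I_max / I_harm          # ~5.71e-6
att_db = 20 * math.log10(att)    # ~ -105 dB
fr = 200e3 * 10**(att_db / 40)   # ~478 Hz (-40 dB/dec from fr to 200 kHz)
Lf = 1.33e-3                     # chosen inductance [H]
Cf = 1 / (Lf * (2 * math.pi * fr)**2)   # ~83 uF
print(att_db, fr, Cf)
```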
[TINA crosscheck: the LC filter skeleton (L1 = 1.33 m, C1 = 83.2 µ) with its attenuation plot before damping, and the filter cascaded with the PWM-switch model showing the interaction region between Zout_filter and Zin_DCDC around the filter resonance]
The input filter skeleton is ready; it's time to understand why the Middlebrook criterion is essential. The input filter, as an "external element", can alter correct power-stage operation. It means that if we plot the control-to-output TF with the input filter placed, we get a completely different result with respect to the one we got before. The consequence is that the compensation network we placed fails and the system becomes unstable. To avoid this phenomenon, damping networks need to be designed. Never neglect this step! Different damping networks exist, and in our example we proceeded by placing an RC damping network.
Every damping network is designed to guarantee that Zout_filter is low enough with respect to the converter input impedance Zin_DCDC. If the difference between the two curves increases, the same happens to the component values, and then their volume increases. A good margin between impedances is around 20 dB.
[TINA schematic and transient: the Buck case study in Peak Current Mode with the damped input filter (L2 = 1.33 m, C5 = 83.2 µ, RC damping branch R7 = 500 m with C4 = 8.58 m) feeding the cycle-by-cycle PCMC controller]
Figure 3.26: Understanding damping effects and the stability criterion for the Buck example in Peak Current Mode
In this example we guaranteed a 5 dB margin between impedances (it's not the best, we know, but remember the initial buck values were chosen randomly). Note the input filter parameters are very big. Typically this design step needs to be refined, and sometimes you don't get the best filter in one shot. For example, placing a double LC cell at the input section could be a good way to go. This chapter ends here, omitting the damping network design for the sake of simplicity, but related literature exists and it is quite precise. We invite the reader to spend some time on this topic, checking how a power-stage linear model helps in different directions.
3.7 Appendix 1 – Building a cycle-by-cycle Peak Current Mode Control model

In this appendix Nicola Rosano provides guidelines to build a simple Spice model (and derived simulator) of a peak current mode controller (PCMC). The basic concept behind PCMC versus voltage mode control (VMC) is how the duty cycle driving the power switch is set. Figure 3.27 shows the basic difference between the two methods.
Figure 3.27: Voltage Mode Control (VMC) vs Peak Current Mode Control (PCMC)
The voltage mode control is the simplest possible control. The sensed voltage is compared against an internal ramp, and the output of the comparator defines the duty-cycle duration.
The current mode, on its side, uses the inductor current as the ramp. The most important difference is that the VMC ramp has an amplitude greater than the control voltage, while the PCMC ramp is defined by the inductor current and its maximum (the peak value) is set by the control voltage.
An internal clock working at the switching frequency periodically sets the flip-flop (FF) output high, allowing the power switch current into the output inductor to start ramping up. That current is instantaneously mapped into an "equivalent voltage" thanks to Rmap (total transresistance). The "mapped current" is then compared against the control voltage, which has a maximum limit, set externally to the IC, that bounds the maximum inductor current.
When the sensed voltage tries to exceed the designed current limit, the comparator output goes high, resetting the FF and switching off the power switch.
Which control is better? Note also that while the operating waveforms (time domain) are identical, the frequency responses (Bode plots) differ.
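The set/compare/reset behaviour just described can be sketched with a minimal cycle-by-cycle loop (a deliberately simplified, hypothetical model: Vout is held constant at 5 V, the Buck case-study values of section 3.5 are used with slope compensation Se = 2.5 kV/s, and the voltage loop is not closed):

```python
# Sketch: minimal cycle-by-cycle PCMC loop for the Buck case study.
fs = 100e3; T = 1.0 / fs
Vin, Vout, L, Rmap = 10.0, 5.0, 100e-6, 0.25
Vc = 1.28125                      # control voltage (value computed in the text)
Se = 2.5e3                        # external compensation slope [V/s]
m1 = (Vin - Vout) / L             # on-slope  [A/s]
m2 = Vout / L                     # off-slope [A/s]
i = 0.0                           # inductor valley current at each clock edge
for _ in range(60):
    # switch ON until Rmap*i(t) + Se*t crosses Vc (clamped to one period)
    ton = min((Vc - Rmap * i) / (Rmap * m1 + Se), T)
    peak = i + m1 * ton
    i = peak - m2 * (T - ton)     # switch OFF for the rest of the period
    duty = ton / T
print(duty, peak)                 # settles to duty = 0.5, peak = 5.075 A
```

Starting from zero current, the loop settles to a steady duty of 0.5; setting Se = 0 instead makes the duty alternate between two values cycle after cycle, which is the subharmonic behaviour the compensation slope is there to damp.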
CCM = continuous conduction mode; DCM = discontinuous conduction mode

VOLTAGE MODE
PRO: Does not need the inductor current information (can go to very small duty ratio). CCM operation without subharmonic instabilities (no need for slope compensation; current limit unaffected).
CONS: No inherent input-line feedforward (weak audio susceptibility); cannot use a small bulk capacitor; bad ripple rejection. 2nd-order system in CCM: mode transition can be a problem.
The equivalent model of the peak current mode control will now be explained in detail.
1. The OPAMP. Three pins have been mapped externally: V− (inverting input), attached to the feedback resistor divider; Vref, attached to a simple voltage generator; and Vcomp, which allows placing the compensation network between Vcomp and V−. R10 and C5 form a low-pass filter and a soft-start function for V+ (non-inverting input).
[Schematic: the OPAMP stage, with V−, Vref and Vcomp mapped externally and the R10 = 1 k / C5 = 2 n low-pass soft-start on V+]
2. The current sense. This section has been modeled with a voltage-controlled voltage source and requires two pins to work properly. It allows a floating sense at any point of the circuit. Note: the full peak-current-mode model is GND-referenced, so the current-sense section simply maps the sensed voltage into a GND-referenced voltage.
3. The Rmap stage. This section has been modeled with a custom voltage-controlled voltage source. Simply speaking, the inductor current is mapped into a specified voltage, but the amplitude of that voltage is not 1:1; it can be higher or lower than the amplitude of the inductor current. The CS4 block (Fig. 3.30) simply increases or decreases the output of the EA, properly scaling its value in order to have a coherent comparison between the mapped inductor current and the control voltage set externally. Typically every IC has an internal threshold Vth, so for a specified Vth set Ilimit properly as:

Ilimit = Vth/Rmap
Example: if Ipeak = 1 A and it is mapped into an equivalent voltage of 0.3 V, we get Rmap = 0.3. If Vth = 50 mV, then Ilimit will be Vth/Rmap = 166.66 mA (Vth and Ilimit are set externally).

[Schematic: adding the current-sense VCVS and the Rmap stage (CS4) between the EA output and the comparator, with the Vthreshold and Ilimit pins]
4. The ramp compensation stage. If the power stage is designed to work in CCM with a duty cycle near 0.5, it can be affected by subharmonic instabilities. To avoid that phenomenon, a ramp compensation method is used: a ramp is properly subtracted from the control voltage to provide the slope compensation. The block CS9 requires one pin to work properly; set the ramp externally.
[Schematic: adding the ramp-compensation stage (CS9) in front of the CS4 scaling block]
5. The ideal comparator compares the mapped voltage (current into the switch) against the previously set control voltage. The block is GND-referenced and requires one pin (Vcc) to work properly; set that value externally.
Figure 3.31: Adding the ideal comparator
6. The protection stage is used to set a maximum duty cycle as specified in the PWM controller datasheet. If the bias point requires a duty cycle greater than the one specified externally, the flip-flop is automatically switched off and the output voltage drops. The block requires one pin (Vreset) to work properly; just set that signal externally.
[Schematic: adding the protection stage driven by the Vreset pin]
7. The flip-flop stage. It has been built using discrete elements and properly turns the power switch on and off. The block requires two pins (Vgate and Clock) to work properly. Vgate is the required amplitude of the signal that will drive the power switch (typically 5 V for GaN FETs and 12 V for MOSFETs); Clock is a pulse signal that periodically sets the flip-flop when its state is high. Both signals are set externally.
[Schematic: adding the discrete flip-flop stage with its Vgate and Vclk pins]
At this point the peak current mode controller (PCMC) can be considered complete in its simple version. Further modifications have been made to improve its capabilities.
8. Creating synchronous signals. If a synchronous DC/DC converter needs to be designed, the following circuit can be cascaded to the previous one. The flip-flop output Va is split into two paths: a direct path and an inverted path. The inverted signal is created with a NAND gate, and each signal (direct and inverted) is shifted by a user-specified delay. The last two stages are linked to Vgate (the same as in the flip-flop section) in order to manually select the Vgate amplitude.
[Schematic: adding the synchronous output section: direct and NAND-inverted paths with user-set delays, driving the Vhi and Vsw-or-Vgnd outputs]
9. Creating shifted signals. If a phase-shift control is needed (half-bridge, push-pull and full-bridge topologies), the following circuit can be used. The direct line is the same as in the previous stage. The shifted line is created by sending the flip-flop output Q to a delay block. Note the CS1 block is crucial! It allows the shift process if, and only if, the flip-flop output is high. This is required in order to avoid phase-shift errors.
[Schematic: adding the phase-shift section: the direct line plus the delayed line gated by the CS1 block]
To validate the model, a half-bridge current-doubler topology has been simulated. The PCMC block in its compact form is shown below:
Figure 3.36: PCMC compact block (all the previous stages are placed internally)
Starting from the bottom of the block and moving in a clockwise direction, the following parameters have been SET:
VGATE: 12 V gate-to-source amplitude (MOSFET switch assumed)
VgateSHIFT: directly linked to the low-side MOS gate (note it is GND referenced)
Vsynch: not used in the half-bridge topology, so directly linked to GND through a resistor
Vsw/GND: linked to the "source" terminal if floating MOSFETs are used
Vup: linked to the "gate" terminal if floating MOSFETs are used
Vth: the PWM controller threshold, set to 50 mV
Ilimit: dependent on Rmap. Here Rmap = 1, so Ilimit = Vth/Rmap = 50 mA. It sets the maximum peak current.
Sense+: linked to the positive terminal of the sensing circuit
Sense−: linked to the negative terminal of the sensing circuit (floating or GND)
VRAMPCOMP: used to set up a ramp-compensation generator in order to avoid subharmonic instabilities. It starts automatically when the output of the flip-flop is high, so no phase errors are present. For Ipeak = 1.55 A, the ramp-compensation amplitude has been set to 0.5 V with the same frequency as the clock signal.
Vcc: supplies both the ideal comparator and the error amplifier (EA). Vcc is set to 15 V. Note the Vgate generator sets the Vgs amplitude, not Vcc.
V−: directly linked to the feedback resistor divider
Vref: directly linked to a 0.5 V generator (in order to get a 1 V output)
VCOMP: required to properly place a feedback compensation network between VCOMP and V−
Vreset: 200 kHz pulse generator shifted by 0.45·Tsw with respect to the clock signal. It resets the flip-flop if the duty cycle exceeds 0.45.
Figure 3.37: Half Bridge current doubler full circuit using PCMC
A 50% step-load simulation is shown below. The ramp compensation on the control voltage slightly reduces the effective peak current.
[Plot: 50% step-load simulation traces: switch current, control voltage and Vout]
Notes:
No input filter present
Cout selected to meet the steady state ripple requirements only
Compensation network not optimized
3.8 Appendix 2 – GNU Octave / Matlab Voltage mode and Current mode C-scripts
VM-CCM Boost Converter
Try it for free on https://octave-online.net/)
if a>b
Analysis='BOOST_CCM_VM'
Ic=-Vout/R/(1-D);
PWM_Switch_param = ' '
%Calculating = 'AC Parameters'
Mod_gain=1/Vramp
VCVS1=Vap/D
CCCS1=Ic
CCCS2=D
VCVS2=D
D_VRAMP=D*Vramp
%Calculating Gc2o(s) Parameters in CCM from Maniktala textbook
Lx=L/((1-D)^2); w0=1/(sqrt(Lx*C)); Q=R/Lx/w0; wrc=1/R/C;
wRHP=R*((1-D)^2)/L; wz_esr=1/(C*ESR_C);
figure(1); hold on;
Gc2oManiktala=1/Vramp*(Vin/((1-D)^2))*(1-s/wRHP)*(1+s/wz_esr)/((s/w0)^2+s/(w0*Q)+1); bode(Gc2oManiktala);
figure(2); hold on;
Gl2oManiktala=1/(1-D)/((s/w0)^2 +s/(w0*Q) +1 )*(1+s/wz_esr);
bode(Gl2oManiktala);
figure(3); hold on;
Zin=R*(1-D)^2*((s/w0)^2 +s/(w0*Q) +1 )/(1+s/wrc); bode(Zin); hold on
figure(4); hold on;
Zout=s*Lx*(1+s/wz_esr)/((s/w0)^2 +s/(w0*Q) +1 ); bode(Zout); hold on
else
StatusCheck = 'BOOST DCM VOLTAGE MODE' %not discussed in this book
end
VM-CCM Buck - Boost Converter
(Try it for free on https://octave-online.net/)
Try the C-script code in five steps using "Octave" at https://octave-online.net/. Octave is a free MatLab "equivalent".
1. Create a new file; 2. Paste the script; 3. Save as ".m"; 4. Run; 5. Output
Figure 3.38: Use free Octave at https://octave-online.net/. Copy and paste the script, changing input parameters only.
VM-CCM Flyback Converter
%cleaning the memory and setup results in engineering format
clear all; close all; clc; format shortEng; s=tf('s');
VM-CCM Forward Converter
%cleaning the memory and setup results in engineering format
clear all; close all; clc; format shortEng; s=tf('s');
PCMC-CCM Buck Converter
(Try it for free on https://octave-online.net/)
PCMC-CCM Boost Converter
(Try it for free on https://octave-online.net/)
Zinden=(gf*gr+(R+Ri)/Ro/Ri/R);
Zmod=Zinnum/Zinden;
x=(L/Ro*(1/Ri+1/R)+gf*gr*L+Cs*(1-D)+C)/Zinnum;
y=(L*(Cs/Ri+C/Ro+Cs/R))/Zinnum;
z=Cs*L*C/Zinnum;
q=(Cs/Ri+Cs/R+C/Ro)/Zinden;
w=(Cs*C)/Zinden;
Zin=Zmod*(1+x*s+y*s^2+z*s^3)/(1+q*s+w*s^2);
bode(Zin); title('Zin'); grid on
figure(4)
hold on;
Zoutden=(-gf+gr+D*gf-D/Ro+1/Ro+1/Ri+1/R);
Zmodout=1/Zoutden;
x=L/Ro;
y=Cs*L;
q=(C+Cs*(1-D)+gf*gr*L+L/Ro*(1/Ri+1/R))/Zoutden;
w=L*(Cs/Ri+Cs/R+C/Ro)/Zoutden;
a=L*(Cs/Ri+Cs/R+C/Ro)/Zoutden;
v=Cs*L*C/Zoutden;
Zout=Zmodout*(1+s/wz_esr)*(1+x*s+y*s^2)/(1+q*s+w*s^2+v*s^3);
bode(Zout); title('Zout'); grid on
else
StatusCheck = 'BOOST_DCM_CM' %not discussed in this book
end
PCMC-CCM Buck Boost Converter
(Try it for free on https://octave-online.net/)
PCMC-CCM Forward Converter
(Try it for free on https://octave-online.net/)
%cleaning the memory and setup results in engineering format
clear all; close all; clc; format shortEng; s=tf('s');
Gc2oManiktalaOpt=G0/N*(1+s/wz_esr)*(1)/(1+s/wp1)/((s/w_subH)^2+s/(w_subH*Q)+1);
figure(1); hold on; grid on; bode(Gc2oManiktalaOpt); grid on; title('Gc2o TF')
h=Ro*R/(Ro+R)*gf;
GL2o=h/N*(1+s/wz_esr)/(1+s/wp1)/((s/w_subH)^2+s/(w_subH*Q)+1); hold on;
figure(2); hold on;
bode(GL2o); grid on; title('GL2o TF')
figure(3); hold on;
Zinnum=R/Ro+1;
Zinden=gf*gr*R+ki*gf+(R+Ro)/Ri/Ro;
Zmod=Zinnum/Zinden;
x=(L/Ro+R*(Cs+C))/Zinnum;
y=L*(C*R/Ro+Cs)/(Zinnum);
z=Cs*C*L*R/(Zinnum);
q=(gf*gr*L+gf*ki*R*C+L/Ro/Ri+R/Ri*(C+Cs))/(Zinden);
w=L*(gf*gr*C*R+C*R/Ro/Ri+Cs/Ri)/(Zinden);
v=1/Ri*(Cs*C*L*R)/(Zinden);
Zin=Zmod*(1+x*s+y*s^2+z*s^3)/(1+q*s+w*s^2+v*s^3);
bode(Zin); grid on; title('Zin TF')
figure(4); hold on;
Zoutmod=Ro*R/(R+Ro);
x=R*L/(Ro*R);
y=Cs*Ro*R*L/(Ro*R); z=0;
q=(Ro*R*C+Cs*Ro*R+L)/(R+Ro);
w=(R*C+Cs*Ro)*L/(R+Ro);
v=Cs*Ro*R*C*L/(R+Ro);
Zout=Zoutmod/(N^2)*(1+x*s+y*s^2+z*s^3)/(1+q*s+w*s^2+v*s^3)*(1+s/wz_esr);
bode(Zout);grid on; title('Zout TF')
else
StatusCheck = 'FORWARD_DCM_CM' %not discussed in this book
end
PCMC-CCM Flyback Converter
(Try it for free on https://octave-online.net/)
%clear the memory and display results in engineering format
clear all; close all; clc; format shortEng; s=tf('s');
w0numQnum=w0num^2*(Cs*Ro)/(Ri*Ro*gf*gr+1);
GL2o=G0/N*(1+s/wz_esr)*((s/w0num)^2+s/(w0numQnum)+1)/(1+s/wp)/((s/w_subH)^2+s/(w_subH*Q)+1);
bode(GL2o); grid on; title('GL2o')
figure(3)
hold on;
Zinnum=R/Ro*(1-D)+R/Ri+1-R*gf*(1-D-gr/gf);
Zinden=gf*gr*R+ki*gf+(R+Ro)/(Ri*Ro);
Zmod=Zinnum/Zinden;
x=(L/Ro*(R/Ri+1)+R*(C+Cs-Cs*ki+L*gf*gr))/Zinnum;
y=L*(C*R/Ro+Cs*R/Ri+Cs)/Zinnum;
z=(Cs*R*L*C)/Zinnum;
q=(L/Ri/Ro+R/Ri*(C+Cs)+L*gf*gr+C*R*gf*ki)/Zinden;
w=L*(C*R/Ri/Ro+Cs/Ri+C*R*gf*gr)/Zinden;
v=1/Ri*(Cs*R*L*C)/Zinden;
Zin=Zmod*(1+x*s+y*s^2+z*s^3)/(1+q*s+w*s^2+v*s^3);
bode(Zin); title('Zin'); grid on
figure(4)
hold on;
Zoutden=(-gf+gr+D*gf-D/Ro+1/Ro+1/Ri+1/R);
Zmodout=1/Zoutden;
x=L/Ro;
y=Cs*L;
q=(C+Cs*(1-D)+gf*gr*L+L/Ro*(1/Ri+1/R))/Zoutden;
w=L*(Cs/Ri+Cs/R+C/Ro)/Zoutden;
v=Cs*L*C/Zoutden;
Zout=Zmodout/N^2*(1+s/wz_esr)*(1+x*s+y*s^2)/(1+q*s+w*s^2+v*s^3);
bode(Zout); title('Zout'); grid on
else
StatusCheck = 'FLYBACK_DCM_CM' %not discussed in this book
end
CHAPTER 4 – ANALOG CONTROL LOOP THEORY
4.1 Introduction
Scouring the enticing landscape of control loop theory for enlightenment, we can often stumble
and lose our way amid seemingly contradictory statements such as: “high gain reduces the effect
of disturbances; therefore, please increase the gain”. But in the very same breath: “high gain
causes oscillations; so reduce the gain”.
Bewildering!?
Here’s another example: “high gain improves load regulation; therefore, you need to increase
the gain”. However, “voltage positioning improves load regulation, and for that reason, you need
to reduce the gain”. “Oh, by the way, don’t forget to set the phase margin to greater than 45°.
Worst-case 50°.” “Remember, phase margin really just needs to be greater than zero to avoid
oscillations.”
So why this one? https://www.powersystemsdesign.com/articles/interpreting-loop-gain-
measurements/18/7041: “As with voltage-mode control, we want to have a phase margin in
excess of 50 degrees. The plot is continued well past crossover to make sure there is sufficient
gain margin (what the gain is when the phase margin hits zero.)”
You ask helplessly: “Why 45°, or why 50°? Make up your mind please!”
“In fact, the smaller the phase margin, the snappier the response. So, reduce the phase margin.”
“Ouch… so why not just 5°?”
“Oh, by the way, what you see on the scope screen has very little to do with loop response,
because the calculated loop response is implicitly based on small-signal analysis. What you are
seeing here is in reality a large-signal response. So, please recalculate your reactive power
components. The L and C, dummy! Oh, and don’t forget to lower ESL and ESR too”.
You ask: “By how much?” “How large is large, how small is small?”
The steady-state or closed loop response/answer to that seems to always be: “Mmmm….”. (Take
it or leave it).
And just when you think you may have at least got the gist of it all, after struggling through
some good University of Colorado books over a pint or two, you have someone advocating 75°
or 76° phase margin now—said to be consistent with a “Q of 0.5”.
https://www.powerelectronics.com/technologies/power-electronics-
systems/article/21853236/transient-response-counts-when-choosing-phase-margin: “An
analytical derivation of the optimum converter phase margin for critically damped response
shows it is close to 76 degrees, well above the traditional recommendation of 45 degrees.”
Incidentally, where did Q suddenly come into the picture? It already seems futile to
ask: what exactly is the relationship between “Q” (quality factor) and phase margin? Not to
mention the relationship of the actual measured overshoot to the “Q”, or to the phase
margin. Is the prevailing, often widely differing industry guidance regarding optimum phase
margin, or Q as you prefer to call it now, just based on simulations? OK, maybe it was based on
lab results after all, but were they under unknown, unrelated, unstated, uncontrolled, perhaps
widely varying conditions? Do they even apply to our case?
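As a partial answer to the Q question: for the textbook second-order loop T(s) = ωn²/(s(s + 2ζωn)), there is a closed-form link between damping (Q = 1/(2ζ)) and phase margin, and it is where the 76° figure comes from. A quick Python sketch (this idealized second-order model is our assumption here; a real switcher loop is not exactly second-order):

```python
import math

def phase_margin_deg(zeta):
    """Phase margin of the unity-feedback loop T(s) = wn^2 / (s*(s + 2*zeta*wn)).
    Crossover |T(jwc)| = 1 gives wc/wn = sqrt(sqrt(1 + 4*zeta^4) - 2*zeta^2),
    and PM = 180 - 90 - atan((wc/wn)/(2*zeta)) = atan(2*zeta/(wc/wn))."""
    wc_over_wn = math.sqrt(math.sqrt(1.0 + 4.0 * zeta ** 4) - 2.0 * zeta ** 2)
    return math.degrees(math.atan2(2.0 * zeta, wc_over_wn))

for q in (0.5, 0.707, 1.0):           # Q = 1/(2*zeta)
    zeta = 1.0 / (2.0 * q)
    print(f"Q = {q:5.3f}  ->  phase margin = {phase_margin_deg(zeta):4.1f} deg")
# Q = 0.5 (critical damping) gives ~76 deg, Q ~ 0.707 gives ~65 deg,
# and Q = 1.0 gives ~52 deg.
```

So Q = 0.5 lands at roughly 76°, and Q ≈ 0.7 to 1 spans roughly 65° down to 52°, which brackets most of the conflicting recommendations quoted above.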
We can also ask: “how exactly does this optimum phase angle recommendation change in going
from voltage-mode control (VMC) to current mode control (CMC)? How does it depend on the
selected L and COUT of the switcher? Or on its mode of operation: continuous conduction mode
(CCM) and discontinuous conduction mode (DCM)? And what is the overall topology-
dependency? In other words, what happens if we are dealing with a boost or buck-boost,
instead of a buck? Is the ‘recommended’ phase margin always 45°? Or is it 76°? And why?”
And suddenly you also get to hear new terms, barely ever explained, such as “conditional
stability.” “On what condition?” you may ask. Do not assume though!
Well, clearly the experts seem to be saying: the gain of the closed-loop system is called the
“closed-loop gain”. But it is the “open-loop gain” which we are really interested in, for that is what
predicts the stability of a closed-loop system. So, we do a Bode plot measurement—which
incidentally, is a plot of the open-loop gain. Yes—taken on a system while its loop is literally
held closed, not open.
You ask: “So why is it known as the ‘open-loop gain’? Why is the open-loop gain so critical to a
closed loop system?”
It is endless! How are we ever going to get to the bottom of the seeming quagmire called control
loop theory?
That’s the rather confusing landscape we all encounter as we take our first baby steps into the
mystical world of control loop theory.
The truth is: control loop theory is a challenging subject indeed. Even seasoned hardware
engineers, so far accustomed to dealing mainly with tangible objects on a lab bench, are
expected to grab a paper and pencil and start playing brainy physicist instead! They must hit
the ground (plane) running, literally, leaping effortlessly through hoops of imaginary p-planes,
s-planes and z-planes, and switching seamlessly between time and frequency domains. If that
were not enough, they also need to embrace the fact that the frequencies involved now can not
only be negative but imaginary—whatever that means! All this can cause the last vestiges of
any old-school physical intuition to implode on itself.
It only gets steadily worse—especially when we attempt to apply generic control loop theory
to switchers without fully recognizing the fact that many of the concepts we struggled to absorb
from all the excellent articles and papers on the subject, need serious reevaluation now. One
reason is that switchers are discrete/digital devices, not continuous as we may have assumed:
they have a discrete “control effort” update interval, tied to the discrete pulses coming in at
the rate of the switching frequency. The net result, expressed
intuitively, is that “error” information is not necessarily sampled and communicated
instantaneously to be acted upon and corrected—it is inherently delayed. That has major
consequences. One of which is: we need to lower the bandwidth of the control loop to
typically less than one-fifth the switching frequency. We will try to provide the reasons why,
a bit later. But the truth is that if the bandwidth is set higher, we are in grave danger of over-
reacting, based on wrong/delayed input information. Of course, were it not for all such reasons,
we perhaps would have logically preferred to set the bandwidth as high as possible. In principle,
or at least seemingly so, we would then attain “instant/ideal correction.” But that is out of reach
in reality, though incidentally, it is something close to what we get with hysteretic control, often
called “bang-bang regulators.” More on that later. First, regular closed loop switchers….
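One way to make the one-fifth guideline plausible (a back-of-envelope argument, not a derivation from this book): treat the once-per-cycle duty update as a transport delay of roughly half a switching period, Td ≈ Tsw/2 (an assumed figure), and note that a pure delay subtracts phase at the rate 360°·f·Td:

```python
import math

def delay_phase_lag_deg(f_hz, t_delay_s):
    """Phase lag in degrees of a pure transport delay e^(-s*Td) at frequency f:
    lag = 360 * f * Td."""
    return 360.0 * f_hz * t_delay_s

f_sw = 500e3                  # assumed 500 kHz switching frequency
t_d = 0.5 / f_sw              # assumed effective delay: half a switching period
for ratio in (10, 5, 2):      # crossover placed at fsw/10, fsw/5, fsw/2
    f_c = f_sw / ratio
    print(f"fc = fsw/{ratio:<2}: extra phase lag = {delay_phase_lag_deg(f_c, t_d):5.1f} deg")
```

At a crossover of fsw/5 the delay alone costs about 36° of phase margin; at fsw/2 it costs a full 90°, leaving almost nothing for the rest of the loop.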
4.2 Air-conditioners to Closed-loop Switchers?
Traditionally, a control loop system is often explained with reference to a mundane room air-
conditioning system. In that example, the setting of the thermostat is the “set point”, acting as
the input (to the control loop system)—referred to as the “IN” node quite obviously. The output
of this closed-loop system, the result, is called the “OUT” node. Here it is the temperature of the
room. To have OUT approach IN, a thermocouple, or sensor, is obviously present somewhere,
to monitor the room temperature OUT, and thus converge to the desired set point IN. An error
stage looks at the “error”, i.e. the difference between the set point and the output. The system
incorporates negative feedback as a means of correction, so if the room temperature goes above
the set point (room too “hot”), cold air gets pumped into the room to try to reduce the error.
And so on.
We are also interested in things like: what is the rate at which cold air gets pumped into
the room if the error is, say, 15°C? What if the error drops to 5°C? Does the rate of cold air being
pumped fall proportionally too—i.e. by a factor of three? Or does it just keep going at the same
rate constantly, and then simply turn off the moment it “thinks” that the error has dropped to
zero? In other words, we are asking if the actual “shape” or “profile” of the correction loop is
important too. And if so, how?
Keep in mind there could be significant delays involved in the sensing and the resultant
response—say for example, based on the non-zero specific heat capacity of the various parts
constituting the sensor and blower, not to mention the thermal capacity of objects in the room.
If too many delays are present, the room temperature, i.e. the output, could easily undershoot,
and go significantly below the set point, before the system even “knows” or realizes it and steps
in to correct it. Similarly, the loop may end up overcorrecting too, before the sensor realizes it.
Thus, we can get (hopefully diminishing) oscillations around the desired set point before OUT
finally converges closer and closer to IN. And in either case, we are interested in minimizing the
under- or over-corrections, and thus come up with the best correction profile too.
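A toy discrete-time simulation makes the delay argument concrete (the gain, delay, and temperatures below are invented purely for illustration): proportional correction acting on a stale, two-sample-old reading rings around the setpoint before settling, while the same loop with no delay does not.

```python
def simulate_room(t_start, setpoint, kp, delay, steps):
    """Proportional thermostat acting on a delayed temperature reading:
    temp[k+1] = temp[k] + kp * (setpoint - temp[k - delay])."""
    temps = [t_start] * (delay + 1)          # sensor history starts at t_start
    for _ in range(steps):
        measured = temps[-1 - delay]         # the sensor sees an old value
        temps.append(temps[-1] + kp * (setpoint - measured))
    return temps

with_delay = simulate_room(30.0, 25.0, 0.3, 2, 60)
no_delay = simulate_room(30.0, 25.0, 0.3, 0, 60)
print(f"with delay: min {min(with_delay):.2f} C, final {with_delay[-1]:.3f} C")
print(f"no delay:   min {min(no_delay):.2f} C, final {no_delay[-1]:.3f} C")
# With the delay, the room undershoots below 24 C before ringing back to ~25 C,
# i.e. the damped oscillation described above; without it, the approach is smooth.
```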
As indicated, at some stage the delayed system may still be pushing in hot air even though the
room is, in reality, already too hot again at that moment. So, that is how we get an overshoot, or
a higher discomfort level, though (hopefully) only momentarily. In addition, in a correctly designed
system, there will be a diminishing error over time, and ultimately the room temperature will
stabilize very close to the set point. Till the next “disturbance”!
We must also understand that, in general, there is, and actually should be, a small, perhaps even
quantifiable “settling error” at all times. That residual error between OUT and IN is essentially
determined by the “gain profile” of the correction loop. There may be a residual 1 or 2°C error
at the end of it all, unless the loop can detect that error progressively, and try to correct it too,
perhaps over an extended period of time. But the residual error per se, is essentially
mathematical in nature. In other words: if the system has a high “gain”, since gain is simply
Δ(OUT) divided by Δ(IN), and it is not infinite, there will be a certain Δ(OUT) (or settling error)
related to a certain finite, non-zero Δ(IN) (the input “error” or disturbance, which is being
corrected by the high-gain loop). The ratio of the two, is based on a finite, not infinite, gain. So,
for a given Δ(IN), we will be left with a small, but non-zero Δ(OUT). For example, if the desired
room temperature is 25°C, and the temperature outside is 30°C, the “error” being 5°C, the room
may finally settle to 25.2°C. If the outside temperature rises to 35°C, the room may settle down
at 25.4°C instead. That is intuitively visualized as being based on the “DC gain” of the correction
loop, not on sudden or immediate responses, which are governed by the “AC gain”.
For example, if the temperature outside suddenly changes by 10°C, the room temperature may
have temporary, but much bigger overshoots or undershoots before it settles down. And that is
a reflection of the “AC gain” (profile) of the correction loop.
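The settling numbers in the example are mutually consistent with a simple rule derived later in this chapter: a disturbance is attenuated by the factor 1/(1 + T0), where T0 is the DC loop gain. Back-calculating from the example (a 0.2°C residual for a 5°C disturbance implies T0 = 24; that value is inferred here, not stated in the text), a quick Python sketch:

```python
def residual_error(disturbance, dc_loop_gain):
    """Output offset left by a finite DC loop gain T0: disturbance / (1 + T0)."""
    return disturbance / (1.0 + dc_loop_gain)

t0 = 24.0                        # implied DC loop gain: 5/(1 + 24) = 0.2
setpoint = 25.0
for outside in (30.0, 35.0):     # the two ambient temperatures from the example
    err = residual_error(outside - setpoint, t0)
    print(f"outside {outside:.0f} C -> room settles at {setpoint + err:.1f} C")
# Prints 25.2 C and 25.4 C, matching the text; an infinite DC gain (a true
# integrator in the loop) would drive the residual error all the way to zero.
```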
Of course the ambient temperature outside does not change “suddenly”. But the disturbance
could mimic that. For example, if someone opens a window or door temporarily, it is a sudden,
applied “disturbance”, and the system will rush in to correct the resulting error. In that case, we
may be interested in knowing what is the speed of correction (the AC response in effect)?
Alternatively, we can suddenly change the setpoint too. So we can ask: what happens if we turn
the knob of the thermostat, say from 25°C to 28°C? How quickly does the room temperature
stabilize now? And what is the “settling time” involved? And so on.
In Figure 4.1 we present a basic control loop embodiment of this, as applied to a simple buck
switcher for starters. With some differences, as explained below!
The IN node commonly used in general control loop theory is now the REFERENCE or REF node
in power conversion. It creates the set point against which the output is compared. The closed
loop gain of the system is now Δ(OUT) divided by Δ(REF). The OUT node in general control
theory is VOUT in power conversion. Despite some fairly common misinterpretation, the “IN” (of
the control loop) is not the input to the power stage, or VIN. It is the input to the control system,
i.e. REF. Also, once the switcher is on, we don’t really wiggle the thermostat/reference around!
So the relevance of the oft-quoted closed-loop gain, as visualized and presented in textbook
control loop theory, hardly has any significance to switchers. In switchers, the primary “inputs”
we are interested in are essentially disturbances, injected at varying points within the closed
loop system. Such as line and load variations. So we need to understand at a more general level,
how disturbances are attenuated (hopefully not amplified), depending on their point of
injection. It is not the same thing as wiggling a thermostat!
The “plant” or “process” in general control theory is now the entire block consisting of three
cascaded stages: the PWM comparator, the switching stage, followed by the LC filter.
However, the “power stage” of the switcher by definition, traditionally includes only the latter
two blocks. So it is the plant less the comparator. The comparator, though part of the plant, is
considered part of the control section of the switcher since it contains no power components,
just signal-level components.
The compensator in general control theory is typically an error amplifier, with all its feedback
components present (usually a bunch of small-signal R’s and C’s). The “sensor” in general
control theory (for example the thermocouple in the usual thermostat control loop theory
example), is typically the voltage divider in power conversion. But more on its actual effect a
little later.
The CONTROL terminal is the same in both representations. But in general control theory it
may actually be called the “control effort”, whereas in power conversion it is usually called the
“control voltage” or “EA OUT” (error amplifier output).
Looking closely at Figure 4.1, we see that the entire process of control hinges on the concept of
negative feedback. So if the output is going up, we try to quickly pull it down! That is why we
see different signs around the summation block in the figure.
Note that in related literature, the summation block is often confusingly represented by a
multiplication sign in the middle of a circle instead of a summation sign. That is likely done
only to make you give up control loop theory altogether? Well!
Figure 4.1: Control loop of a switcher
Note that we will often use gain symbol “H” for parts of the feedback section, and “G” for the
plant. But in literature it is sometimes the other way around, with H being the plant and G the
compensator. Beware! Sometimes G is used for all the blocks, in the plant and the compensator.
Sometimes K is used, as in older Unitrode App Notes. Or “A” is used for the plant and β for the
compensator. And so on. Watch out for a lot of possible confusion as a result of all these
terminological variations.
The most important thing to keep in mind is that the representation of Figure 4.1 assumes we
have a multiplicity of cascaded gain stages. Which implies the gain of each stage can be
quantified as a standalone, and the net gain is then the product of all the individual cascaded
gain blocks. But that may turn out to be a pipe dream. For example, the buck topology is the
only one where we can actually point to a separate “LC post filter” within the plant. In a boost
or buck-boost, even ignoring the relative locations of the L or C, the LC stage is really not
“separable” from the rest—because the node between the L and COUT is connected to the
switch/diode—unlike a buck. So we cannot separate the filter from the switch function. Well,
at least not easily.
As pointed out, in the canonical model from Middlebrook, we can indeed separate the boost and
buck-boost L and C into a separate LC stage, provided we replace the inductor L by an
“equivalent inductor” Le, equal to

Le = L / (1 − D)²
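A quick numeric sketch of why this matters (Python rather than the book's Octave, with assumed component values): the effective LC corner frequency the control loop sees in a boost or buck-boost moves with duty cycle, because of the (1 − D)² term.

```python
import math

def effective_lc_corner_hz(l_h, c_f, duty):
    """LC corner frequency using the canonical-model equivalent inductance
    Le = L / (1 - D)^2 (boost / buck-boost)."""
    le = l_h / (1.0 - duty) ** 2
    return 1.0 / (2.0 * math.pi * math.sqrt(le * c_f))

L, C = 10e-6, 100e-6              # assumed 10 uH and 100 uF
for d in (0.0, 0.5, 0.8):
    le_uH = L / (1.0 - d) ** 2 * 1e6
    print(f"D = {d}: Le = {le_uH:6.1f} uH, corner = {effective_lc_corner_hz(L, C, d):7.1f} Hz")
# At D = 0.5, Le is 4x L and the corner frequency halves; at D = 0.8, Le is 25x L:
# one intuitive reason boost/buck-boost compensation is so duty-cycle dependent.
```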
The voltage divider is also not necessarily separable into a separate gain block. In Figure
4.2, we show how, in a typical error amplifier stage, the lower resistor of the divider goes out
of the picture. It is just a DC biasing element, not connected with the AC response, which is
what we are interested in. In other words, the divider doesn’t enter the picture at all from a
control loop perspective. However, if we use a transconductance op-amp, the divider does enter
the picture as a separable gain block.
To experienced power engineers, some of the nuances mentioned above may seem to be as subtle
as flying quarter-bricks. It may genuinely surprise them to learn that these “details” are still
often missed, or are at least routinely glossed over. But there are some other experts who are
not at all surprised. They knew this was going to happen, and tried to warn budding power
supply engineers a very long time ago.
The unpolitically correct, or politically incorrect, gist of what some of them have said is: There
are many self-proclaimed “experts” who don’t get it. One of those memorable soothsayers was
the flamboyant Lloyd Dixon of Unitrode (now Texas Instruments). That’s what he said,
verbatim, while presenting his “Control Loop Cookbook” paper at the Unitrode Power Seminar
in Germany. The year was 1996, and this author had the privilege to attend the presentations.
Later that day, Bob Mammano, now considered the “father of the PWM IC industry”, presented
his topic: “Fueling the Megaprocessors - Empowering Dynamic Energy Management”.
A short extract from the written, and therefore more “PC” (politically correct), part of Mr.
Dixon’s no-holds-barred presentation is reproduced below (see
http://encon.fke.utm.my/nikd/Dc_dc_converter/TI-SEM/slup113.pdf ).
That was the title of Lloyd Dixon's presentation that day! What he said was this: "A tremendous
amount of effort has been put into the development of small-signal techniques and linear
models of the various switching power supply topologies. Hundreds, if not thousands of papers
have been written over the years. Your academic “mother”, whoever “he” may be (note the PC
sexual ambiguity), typically focuses on new topologies and/or linear modeling. While not
disparaging any of these efforts – far from it, these contributions have been immense and totally
necessary – there has been a lack of balance and a tendency to try to force behavior that is
uniquely related to switching phenomena into linear equivalent models (with sometimes
uncertain results). Many of the major significant problems with switching power supplies do
not show up in the frequency domain, or in the time domain using averaged models, unless
these problems are anticipated in advance and provided for in the models. Simulation in the
time domain using switched models, although slower, reveals these problems that would have
been hidden.”
Perhaps somewhere along the way, a few relatively inexperienced engineers have gotten a bit
carried away with their new-found prowess manipulating Laplace transforms and so on, and
have therefore ended up downplaying, if not completely disregarding, some of these “subtle”
aspects. Or perhaps their simulations failed to reveal what they perceived to be “corner-case”
problems—since they had used small-signal averaged or equivalent linear models for the
switcher to start with, which then turned out to be a self-fulfilling prophecy: you can’t see in
the dark if you didn’t realize it might be dark and forgot to bring along a flashlight.
Figure 4.2: A typical error amplifier stage (annotations): This is AC (change) analysis, so VREF is
ignored, as it is a biasing level only. By definition, the transfer function (“H(s)”) is
output/input = VCONT/VO. The upper resistor Rf2 is, in general, Z2(s). The lower resistor Rf1 is
“Rbias” here; however, Rf1 is only a DC-biasing resistor: it does not appear in the AC analysis,
and is therefore not included in the transfer function.
The first thing we have to keep in mind is that control loop theory can be applied to a switcher
only when its power stage is considered reasonably optimal. That should not become the
stumbling block. Otherwise, we would be just wasting all our valiant efforts in the frequency
domain.
As an example, see Figure 4.3. Here we are showing a sample transient waveform from a
vendor.
Figure 4.3: Typical vendor plot showing load transient and inductor current (Intersil)
This vendor has helpfully provided the corresponding inductor current waveform too, which is
quite unusual. It is quite revealing if you stare at it a bit.
The first interesting thing in Figure 4.3 is: the undershoot reverses direction exactly at the point
where the inductor current reaches its final intended value. So that seems to be the real gating
item. Not the control loop. Of course, after that point, being a voltage-mode control system, the
inductor current does overshoot a bit, commensurate with the observed output voltage
overshoot. In a sense, the inductor current has a certain “momentum”.
Based merely on the fact that the undershoot stops exactly at the point where the
inductor current reaches its final intended value, we can say the overall response (undershoot)
was power-stage dominated, not control-loop dominated. Which implies that anything more we could do
with the control loop may have fallen flat on its face.
In fact the control loop appears to have reacted long before the maximum undershoot
(minimum voltage) was recorded. See Figure 4.3 again. That is actually typical of all well-
designed control loops—they all “kick in” after about three or more switching cycles.
We see from the same figure that after it kicks in, the control loop was certainly
trying to command a correction, but oddly, nothing significant or dramatic happened—at least
not immediately. Why? One reason for that could be some sort of an architectural limitation.
Indeed, voltage-mode control (VMC) has some inherent deficiencies, based on the “momentum”
of its inductor current. But despite that, VMC is nowadays considered a better choice, especially
with input feedforward included (to be discussed shortly). That is in comparison to current-
mode control, with its now-perceived inherent deficiencies such as subharmonic instability,
noise sensitivity, etc.; an approach that seems to be receding into the distance.
Note that hysteretic controllers seem quite promising in this regard, but they have a variable
switching frequency, especially during transients. So, we may need to validate their system-
level acceptability.
What did we mean by architectural limitations? Well, to optimize any voltage-mode
controlled buck switcher, we need to, at a bare minimum, demand that it can get quite close to
100% duty cycle—to rapidly build up the inductor current, as shown in Figure 4.4.
Keep in mind that this ~100% duty cycle maximum limit is a very bad idea for a buck-boost or
a boost, which actually depend on the non-zero OFF-time to deliver energy to the output! If we
don’t provide that, the output will continue to sag, whatever the current buildup in the inductor.
And that is also the intuitive explanation of the oft-mentioned “right half plane (RHP) zero”.
The RHP zero doesn’t exist for buck or buck-derived topologies such as the forward converter,
but does enter the picture with the boost and with the buck-boost, including their derivative
topologies such as the flyback.
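For reference, the standard CCM boost expression for this zero is f_RHPZ = R(1 − D)²/(2πL). A small Python sketch with assumed values shows how it slides down in frequency as D rises:

```python
import math

def boost_rhp_zero_hz(r_load_ohm, l_h, duty):
    """RHP zero of a CCM boost: f_RHPZ = R * (1 - D)^2 / (2 * pi * L)."""
    return r_load_ohm * (1.0 - duty) ** 2 / (2.0 * math.pi * l_h)

R, L = 5.0, 10e-6                 # assumed 5 ohm load and 10 uH inductor
for d in (0.3, 0.6, 0.8):
    print(f"D = {d}: RHP zero at {boost_rhp_zero_hz(R, L, d) / 1e3:5.1f} kHz")
# The zero slides down as D rises (heavier boosting); since crossover is usually
# kept a few times below f_RHPZ, the usable loop bandwidth shrinks along with it.
```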
Another thing we can do is change the clock to respond faster to a transient (pulse-on-demand),
and if the varying clock frequency is acceptable, it should be fine. That’s what happens in a
hysteretic controller.
Unfortunately, from Figure 4.3 we can see that there is no sign, either of on-demand pulses (off-
cycle switch turn-ON), or of maximum duty cycle. Which could partially explain the slow
curving change in output voltage, despite the control loop having kicked in.
Or as indicated, perhaps the inductance was simply too large.
However, it may also be that the power stage was well-designed and the duty cycle may indeed
have been able to “max out” close to 100%, but it simply wasn’t asked to, at least not quickly
enough!
Now, that would imply a poorly designed feedback stage, say with poor bandwidth! But we see
enough signs that in this case the control loop did kick in after about three switching cycles.
Which is about right! So, poor bandwidth doesn’t seem to be the culprit here. A bunch of other
things to check out instead.
The type of output response we would like to see is shown in the lower half of Figure 4.5. We
don’t need to see the inductor current to make some observations. Notice the sharp edge in the
output voltage under/overshoot as compared to the waveform in the upper half of the figure.
That seems to indicate a control-loop dominated response, not power-stage limited.
But also keep in mind that to declare victory, this particular sharp-edged output waveform must
correspond to a large-signal event, such as zero to max load. We certainly cannot pass judgment
on the power stage, whether it is “optimal” or “non-optimal”, if we are only doing say, an 80%
to 100% load test. Because then the small-signal/averaged models Mr. Dixon warned us
against, do apply. In effect, we are then no longer dealing with real-world switchers. Just
textbooks.
Figure 4.6: Hysteretic controller response (traces: VOUT, load current, and SW node; note the
bunching of pulses, i.e. variable frequency, during the transient)
In Figure 4.6 we show the response of a typical hysteretic controller. Notice the on-
demand pulses here, and also the sharp edge as the undershoot starts to head upwards.
So, one thing seems quite certain. To get the control loop to make its presence felt during large
signal events, we need to optimize the power stage first. One of those steps is to lower the
inductance. However, setting the current ripple ratio “r” close to 0.4, as advocated by this
author, is definitely in the ballpark, and there is likely no point reducing the inductance any
further. Look for AN-1197 and AN-1246 on the web, originally written by this author in 2001 at
National Semiconductor.
What about the output capacitor?
We now hark back to the solved example in Chapter 19 of Switching Power Supplies A-Z, Second
Edition, to reveal the importance of selecting this vital component too. The results are presented
in Figure 4.7. Basically, there are three main criteria for capacitor selection. One is the ripple,
measured under steady-state max load. It leads to a minimum capacitance requirement of 5.2
μF in this particular example. Another is based on the overshoot which will occur if we just
suddenly disconnect the load. This leads to a minimum capacitance requirement of 22 μF.
And third, we have a certain control loop assumption, which says the output cap must be capable
of providing all the energy for at least three cycles, in case of a large load step. Because during
these three cycles, in effect the inductor is not capable of providing most of the energy
requirement, as its current slews up in accordance with Figure 4.4 (discussed in more detail
later). This leads to a minimum capacitance requirement of 30 μF in our example. We picked
33 μF as a potential final value. We should however consider tempcos (temperature
coefficients) again, and voltage coefficients too.
Note that if in the second calculation, instead of a 2.2 μH inductor, we had used a 4.7 μH
inductor, it would have required a minimum capacitance of almost 50 μF, which would have
overshadowed the control loop dictated minimum capacitance of 30 μF. So, we have to be very
careful not to select a larger inductance than recommended. But r = 0.4 should work fine usually.
On the other hand, if the control loop design is sluggish, requiring not 3, but say 6 switching
cycles to start acting, we would need to correspondingly increase the minimum capacitance
from 30 to 60 μF. That adds to the cost too.
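The second and third criteria scale exactly the way the text describes, which we can check with the usual energy-balance and charge-deficit estimates (the formulas are standard first-order approximations, and the numeric spec below is invented for illustration; it is not the book's worked example):

```python
def c_min_load_release(l_h, i_step_a, v_out, dv_allowed):
    """Energy balance on a sudden load release: the inductor energy 0.5*L*I^2
    is dumped into the output cap, so C >= L*I^2 / ((Vo + dV)^2 - Vo^2)."""
    return l_h * i_step_a ** 2 / ((v_out + dv_allowed) ** 2 - v_out ** 2)

def c_min_loop_delay(i_step_a, f_sw_hz, n_cycles, dv_allowed):
    """Charge deficit while the loop 'wakes up': the cap alone supplies the
    load step for n_cycles, so C >= I * n * Tsw / dV."""
    return i_step_a * n_cycles / (f_sw_hz * dv_allowed)

# Invented illustrative spec (NOT the book's example): 3.3 V output, 4 A step,
# 150 mV allowed deviation, 1 MHz switching, 2.2 uH inductor.
i_step, v_out, dv, f_sw = 4.0, 3.3, 0.15, 1e6
c2 = c_min_load_release(2.2e-6, i_step, v_out, dv)
c3 = c_min_loop_delay(i_step, f_sw, 3, dv)
print(f"criterion 2 (overshoot): {c2 * 1e6:.1f} uF")
print(f"criterion 3 (3-cycle):   {c3 * 1e6:.1f} uF")
# Criterion 2 scales linearly with L (2.2 uH -> 3.3 uH raises it 1.5x), and
# criterion 3 scales linearly with the number of cycles (3 -> 6 doubles it):
# exactly the sensitivities the text warns about.
```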
So, hand-in-hand with power stage optimization, we need to optimize the control loop too for
best results.
And finally, besides juggling a few things around before deciding the optimal power component
selection, let’s not forget the basic architecture of the controller either.
Referring to Lloyd Dixon’s presentation again, he says: “The open-loop gain, T, is defined as the
total gain around the entire feedback loop (whether the loop is actually open, for purpose of
measurement, or closed, in normal operation).” So in our terminology, T = GH, where G and H
themselves may be the product of cascaded stages as we can see from Figure 4.1
Similarly, Dixon says: “Closed-loop gain defines the output vs. control input relationship, with the
loop closed”.
Actually, Mr. Dixon is calling the reference the “control” node here. Which is a bit misleading.
Besides that, the reference in his case is placed between the voltage divider and the error
amplifier. Which actually need not be the case always, even assuming the divider can be taken
out as a separate gain stage, which in reality may not be so either, as mentioned previously.
Which is why it is very important to truly understand how a disturbance gets attenuated on
account of the closed loop control system, compared to the case of no feedback (open control
loop). And this also depends on the point of injection of the disturbance, in this case the “wiggle”
in the reference (which as mentioned is actually of no real significance in a switcher either!).
Just to resolve the widespread confusion, let’s see where the open-loop gain T appears in the
closed-loop gain function, and get comfortable with our understanding.
Refer to Figure 4.8, where we compare what happens in two cases, depending on the location
of the reference wiggle, whose effect on the output is what closed loop gain is all about.
Figure 4.7: The three output capacitor selection criteria of the solved example: (1) 5.2 μF from
steady-state ripple, (2) 22 μF from the load-release overshoot, (3) 30 μF from the three-cycle
control loop assumption. Note: if the inductance were 3.3 μH, not 2.2 μH, criterion 2 would give
33 μF, not 22 μF, and would therefore start dominating the selection. Ensure the inductance is
NOT excessive.
Figure 4.8: Point of injection of disturbance and corresponding closed loop gain (in the upper
case the result for y/ySP involves Hx = “HS”; in the lower case, Hx = “HCHS”)
In the first (top) case, we first realize that any change in the output, “y”, propagates clockwise
in the closed-loop system. So, starting from the output, we retrace its path backwards through
the plant G and the compensator HC (anticlockwise), and the signal at the output of the summation
block must be y/(G×HC). Now, going clockwise from the output rail instead, y becomes y×HS
after passing through the sensor. After the summation block it is therefore ySP − y×HS. But this
must equal y/(G×HC). Equating the two, we get the expression evaluated within the figure.
Similarly, we can go clockwise and also anticlockwise with the reference re-positioned as
shown in the lower half of Figure 4.8, and we get the expression evaluated within the figure.
We see that in both cases of the figure, we end up with a form

Closed-Loop Gain = y/ySP = (1/Hx) × T/(1 + T)

where Hx is the net gain of all the gain blocks between y and ySP in the forward (clockwise) direction.
Note that T is simply the net gain of all the blocks involved, be they considered part of the plant
(G) or of the feedback network (H). And that leads us to a very general derivation of the closed
loop gain of several cascaded stages, where we can just use one symbol for all of them, say G as
in Figure 4.9.
1/(1+T) is the “correction factor” which tells us that the effect of closing the loop reduces
(hopefully) the effect of the disturbance on the output, by the factor 1/(1+T).
Figure 4.9: General form of closed loop gain for arbitrary point of injection of disturbance
b) At high frequencies, if T equals -1, the denominator will “explode”. In effect we will
have sustained oscillations, because there are always limiting parasitics present which
will not allow the output to really rise to “infinity”.
Still, we need to understand how T can equal -1. Very simply that means, in terms of
magnitudes and phase expressed in polar notation, i.e. in the form of r ∠ θ, we can write
T = 1∠-180°. That is a magnitude of 1 and opposite phase.
So we have full-blown instability if the loop gain T equals 1, and the corresponding phase
is -180°. Why is that a problem? Because combined with the intrinsic -180° associated
with negative feedback (the signs around the summation block where the reference is
introduced in Figure 4.1, or the fact that the output is fed to the inverting pin of the op-
amp in Figure 4.2), we get a total phase lag of -360°. This means: “in-phase”. The
disturbance is reinforcing itself.
We thus arrive at the criterion for instability of a closed-loop system: It all depends on T, which
as we are now realizing is better referred to as the “loop gain”, not really “open-loop gain”,
which is rather misleading. So at the particular frequency at which T = -1, the disturbance has
in effect gone around the closed loop and returned to the point of injection with exactly the
same magnitude and phase it started off with. So, it is going to reinforce and sustain itself!
The frequency at which ||T|| = 1 (same as 0dB axis) is called the crossover frequency. If at the
crossover frequency, the phase of T is exactly -180°, then the system will be unstable!
Note that in the 1996 Unitrode presentation, Mr. Dixon went to great pains to explain why there
is a possibility of oscillation only at the crossover frequency. Why is it that the signal can actually
return in phase with a gain far greater than 1, and the system still be considered stable? Mr.
Dixon confessed to spending “sleepless nights” thinking about this, and ultimately explained it
as a vector formation that just can’t “close”…and thus can’t exist, at least not for long.
But it does throw up the possibility of “conditional stability” which people in the industry seem
to have widely differing views on. Because one problem is, if during a sudden large load
transient, the inductor needs time to build up current, or the error amplifier “rails”, then in
effect the gain collapses, and it could at some point reach the self-sustaining condition
expressed as T = 1∠-180°.
This author will at a later stage describe how in fact this “conditional stability”, can actually be
a major contributor to the “ringing” we see on the output during load transients, and thus
should be carefully weeded out as far as possible.
Lloyd Dixon also cautioned quite a bit against conditional stability, but more in relation to the
gain collapsing for various reasons, and the resulting propensity for sustained oscillations. He
didn’t relate it to any improved transient response, which we will do later in this book.
In an effort to create a safety margin from instability, the terms “phase margin” and “gain
margin” were coined. See Figure 4.11. This is called the Bode plot and it tells us everything
about “T”, the loop gain (previously called open loop gain). That is all we need to know for
ensuring stability (Nyquist’s criterion, as Mr. Dixon pointed out, is useful, but truly necessary
only in cases where there are multiple fCROSS).
The limitations of Bode plots are: they really tell us nothing about what will happen in the time
domain—in terms of amplitude or frequency of overshoots or undershoots as a result of any
line or load transients, or even reference-voltage wiggles. That is why the “connection” to any
optimum phase margin remains nebulous.
In Figure 4.11, we also see what conditional stability can look like, but it is not clear from the
Bode plot what all are the unintended consequences of this—whether it should be subdued in
some way, or if it truly represents a “rugged” system as Ridley opines here:
http://www.ridleyengineering.com/loop-stability-requirements.html?showall=&start=2
Gain and phase margin are inter-related based on the fact that we typically aim for the loop gain
T to drop at the rate of “-1”. That is a slope of -20 dB/decade. Note that gain expressed in decibels
(dB) is GaindB = 20 log ||Gain||. So this means the gain is falling at the rate of 10× (i.e. 20 dB) for
a 10× (i.e. decade) shift in frequency. This simply means we have set the loop gain to be inversely
proportional to the frequency past some break-point (in this case the break-point is close to 0
Hz). That is the most common and easily-handled profile for T, because it corresponds to a “first
order filter” (involving only one reactive component combined with a resistance). Such filters
can produce only 90° of phase shift, so that leaves us with a comfortable (stable) phase margin
of around 180-90 = 90°. Though with some reactive parasitics added to it, we may end up with
a lower phase margin. Or we may deliberately try to lower the phase margin, say by placing a
pole at a frequency close to the crossover frequency, and so on. But a second-order filter profile
for T would have produced a 180° of phase shift right off the bat, rendering it unusable, because
the phase margin would then be zero! So a “-1” slope is what we aim for.
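The “-1” slope argument above can be verified with a few lines of arithmetic. The sketch below assumes an idealized first-order loop gain T(s) = ωc/s (the crossover frequency is an illustrative value, not from the text): it falls at -20 dB/decade, carries a constant -90° phase, and so leaves a 90° phase margin at crossover.

```python
import cmath, math

# Minimal check of the "-1 slope" argument: a first-order loop gain
# T(s) = omega_c / s falls at -20 dB/decade with a constant -90 deg phase,
# leaving a 90 deg phase margin at crossover. f_cross is illustrative.
def loop_gain(f, f_cross):
    s = 1j * 2 * math.pi * f
    return (2 * math.pi * f_cross) / s

f_cross = 50e3
T = loop_gain(f_cross, f_cross)               # evaluate at crossover
mag_db = 20 * math.log10(abs(T))              # 0 dB at crossover, by definition
phase_deg = math.degrees(cmath.phase(T))      # -90 deg for a single pole
phase_margin = 180 + phase_deg                # 90 deg of margin

print(mag_db, phase_deg, phase_margin)
```

Adding parasitic poles near crossover would subtract further phase from the -90°, which is exactly how the comfortable 90° margin erodes in practice.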
(Figure content: a gain-phase (Bode) plot showing the gain margin at the 0 dB axis, a
conditional-stability region, and the phase margin, i.e. how much the phase lag of the loop
gain T, in degrees, is less than the -180 degree threshold when the gain is 1. With an inherent
180 degrees of phase lag on account of negative feedback, if the total phase lag reached 360
degrees with a gain of unity, a sustained oscillation becomes possible.)
Figure 4.11: Gain and phase margins, along with conditional stability
In Figure 4.12, we show a typical table of the relationship between phase and gain margin,
based on the “-1” slope profile we desire. However, some reactive parasitics could cause the
phase to veer upwards as shown in Figure 4.13, and so there may be no way, practically
speaking, to define a gain margin. In that case, ensuring phase margin should usually suffice.
We typically pick a high DC gain for T and then roll it off at high frequencies to avoid phase-shift
based reinforcement of disturbances. That defines the AC response. But it is reassuring to
confirm how a high DC gain helps in bringing the DC rail close to its intended set point value
(reference).
In Figure 4.14, we show how this works, putting some numbers to the test to feel comfortable.
It reveals that a certain settling error is inevitable, since DC gain is practically never infinite.
Figure 4.14 (content summarized): a buck example with VIN = 10 V and VREF = 5 V, where the
plant (PWM comparator plus switch) has a DC gain of VIN/VRAMP = 1. In the first case the error
amplifier has a DC gain of 51, so the loop gain is GH = 1 × 51 = 51, and the output settles at
5.1 V: a steady-state error of 0.1 V. In the second case the error-amplifier DC gain is 501, so
the loop gain is GH = 1 × 501 = 501, and the output settles at 5.01 V: a steady-state error of
0.01 V.
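The DC settling-error trend of Figure 4.14 can be sketched numerically. The snippet below assumes the standard unity-feedback relation VOUT = VREF × T/(1 + T), so the steady-state error is VREF/(1 + T); this closely reproduces the 0.1 V and 0.01 V errors quoted in the figure.

```python
# A numeric sketch of the DC settling-error trend of Figure 4.14.
# Assumes the standard unity-feedback relation VOUT = VREF * T/(1 + T),
# so the steady-state error is VREF/(1 + T).
VREF = 5.0  # volts
for T_dc in (51, 501, 5001):
    error = VREF / (1 + T_dc)
    print(T_dc, round(error, 4))
# Higher DC gain -> smaller settling error, but never exactly zero,
# since the DC gain is practically never infinite.
```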
This leads to the AC and DC analysis shown in Figure 4.15, which serves to emphasize what we
are really trying to eventually do. It is good to keep our key goal in mind at all times, lest we
lose our way.
Figure 4.15: AC and DC analysis of a transient waveform and ultimate goal
There is a technique to improve the AC response by trading off some DC gain in the process.
Though that leads to a deterioration in the settling accuracy, as we see from Figure 4.16, it
helps restrict the transient response to within a certain acceptable window.
This technique essentially allows the output rail to collapse a little bit as the load is increased.
This can be done by introducing a small resistance between the point of regulation and the
load. That would be called passive voltage positioning. Since this is bound to be slightly
dissipative, the modern technique instead varies the set point as a function of load. It is called
active voltage positioning.
Either way, this positions the output voltage at the lower end of the acceptable window, so now
if we suddenly remove the load, the output rail will tend to fly up as expected. But since it was
positioned further down to start with, it now has a larger available overshoot (excursion) before
it exceeds the upper threshold of the allowed window.
Some have expressed the view that this allows the output capacitance to be reduced somewhat.
See www.linear.com/docs/5600. Indeed, but only if the control loop is the dominant criterion
for selection of the output capacitance, as described in Figure 4.7.
Figure 4.16: Voltage positioning
Returning to Figure 4.10: there we featured a simple numerical example using a regular buck
converter to show how a high DC gain helps reduce the effect of input variations on the output.
Now we want to extend the same argument to an AC-DC power supply and show how the
crossover frequency attenuates the low-frequency input voltage ripple from appearing on the
output.
Note that here, “input” once again refers literally to the input rail, not the reference. And
admittedly, it is better to call that the “line” instead, to avoid confusion.
Let us take the case of an AC-DC power supply with a certain input ripple at 100Hz (full-wave
rectified input of 50 Hz). Assuming it is a forward converter with buck-like characteristics, and
its duty cycle is 30%, the input-to-output transfer function will provide a dc attenuation of
|20log(D)| = 10.5 dB, because D is the factor that connects the input rail to the output rail, as
in Figure 4.10.
But this may receive a further attenuation due to the turns ratio, which may be NPRI:NSEC equal
to 20:1. That gives us 20×log(20) = 26 dB. So we have a net attenuation, without feedback, of
10.5 + 26 = 36.5 dB. In terms of actual factors, this is equal to an attenuation of

Gain attenuation = 10^(dB/20) = 10^(36.5/20) = 66.8
This means that if we have an input ripple of 10V, the output would have seen a corresponding
ripple component of 10/66.8 = 150 mV.
But now let’s introduce closed-loop correction. Suppose the entire loop gain (T) is such that it
falls at roughly -1 slope, and crosses over at 50 kHz. We ask: what is the loop gain at 100 Hz,
the frequency of our interest here? Since a -1 slope simply indicates inverse proportionality,

Loop_gain100Hz / Loop_gainfCROSS = fCROSS / 100Hz

Loop_gain100Hz = 50000/100 = 500 (since Loop_gainfCROSS = 1 by definition)
Expressed in dB, this is

20 × log(Loop_gain100Hz) = 20 × log(500) = 54 dB
So since the correction factor is 1/(1+T) ≈ 1/T, this is equivalent to an additional attenuation
of 54 dB. So now the net attenuation is 54 + 36.5 = 90.5 dB. In terms of factors
Gain attenuation = 10^(dB/20) = 10^(90.5/20) = 33.5k
This means that if we have an input ripple of 10V, the output will see a corresponding ripple
component of 10V/33.5k = 0.3mV.
This is a major improvement over the 150mV without feedback.
Of course, to get the actual output ripple, we have to add the contribution from the output filter
stage etc. This is just the additional low frequency modulation that will be superimposed on that
high-frequency ripple.
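The whole attenuation chain above can be recomputed in a few lines. The sketch below uses only the numbers given in the text (D = 0.3, turns ratio 20:1, 50 kHz crossover with a “-1” slope, 10 V of 100 Hz input ripple):

```python
import math

# Recomputing the worked example: forward converter with D = 0.3, turns
# ratio 20:1, a "-1"-slope loop crossing over at 50 kHz, and 10 V of input
# ripple at 100 Hz (all numbers from the text).
D, turns_ratio = 0.3, 20.0
f_cross, f_ripple = 50e3, 100.0
vin_ripple = 10.0   # volts

att_duty_db = abs(20 * math.log10(D))          # ~10.5 dB from the duty cycle
att_turns_db = 20 * math.log10(turns_ratio)    # ~26 dB from the turns ratio
open_loop_db = att_duty_db + att_turns_db      # ~36.5 dB, no feedback

T_100Hz = f_cross / f_ripple                   # = 500, since -1 slope => T ~ 1/f
loop_db = 20 * math.log10(T_100Hz)             # ~54 dB extra, since 1/(1+T) ~ 1/T
closed_loop_db = open_loop_db + loop_db        # ~90.5 dB total

print(round(vin_ripple / 10**(open_loop_db / 20) * 1000))     # ~150 mV open loop
print(round(vin_ripple / 10**(closed_loop_db / 20) * 1000, 2))  # ~0.3 mV closed loop
```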
The PWM comparator is a key gain block of the closed loop system, as shown in Figure 4.1. Its
gain (transfer function) has an input which is the control voltage, and an output which is the
duty cycle.
The PWM comparator basically superimposes the control voltage against a ramp, and picks a
duty cycle based on the intersection, as shown in Figure 4.17. Since the control voltage is the
“in”, and D is the “out” for this gain block, we can see from the figure that the gain is simply
1/VRAMP. The smaller the ramp, the higher the gain. Also, this gain is not a frequency-dependent block.
It applies to all frequencies extending up to the switching frequency and beyond. There is no
“associated” phase shift either. This is just “DC”.
Coming to the switch,

VO = D × VIN (buck)

Therefore, differentiating,

dVO/dD = VIN
So, in very simple terms, the required transfer function of the intermediate “duty-cycle-to-
output stage” (i.e. the switch) is equal to VIN for a buck. And it is not frequency-dependent either.
Just a DC block.
All the frequency dependent response of the plant comes from its LC post filter.
Finally, the control-to-output (plant) transfer function is the product of the three (cascaded)
transfer functions, i.e. it becomes, using s = jω, and j = √(-1):
G(s) = (VIN/VRAMP) × [1/(LC)] / [s² + s/(RC) + 1/(LC)]    (buck: plant transfer function)
The LC post filter (third term) above will be discussed in more detail later. Here L is the buck
inductor, C the output capacitor, and R the load resistor across the output terminals of the buck.
This is an approximation so far, because we are ignoring the ESR (equivalent series resistance)
of the output capacitor, and the DCR (DC resistance) of the inductor. Alternatively, this
simplified plant transfer function can be written out as:
G(s) = (VIN/VRAMP) × 1 / [(s/ω0)² + (1/Q)×(s/ω0) + 1]    (buck: plant transfer function)

where ω0 = 1/√(LC) is the resonant (break) frequency of the LC post filter, and ω0Q = R/L. Or
equivalently, Q = R√(C/L).
The gain of this PWM block is “out”/“in” = D/VCONT = 1/VRAMP.
Note that the gain of any block need not be of the form voltage/voltage. Here, for
example, it is duty cycle/control voltage.
4.12 Input Line Voltage Feedforward
One of the oft-touted historical advantages of CMC, is inherent line rejection. Let us look at that
a bit.
Figure 4.17 implies that the PWM ramp is created artificially from the fixed internal clock of the
switcher. That is voltage mode control (VMC) of course. In current mode control (CMC), the
PWM ramp is an appropriately amplified version of the switch/inductor current.
Though the line feedforward technique described in Figure 4.18 is applicable only to VMC, the
original inspiration behind the idea does come from current mode control— in which the PWM
ramp, generated from the inductor current, automatically increases if the line voltage increases.
That partly explains why current mode control seemed to respond so much “faster” to line
disturbances than traditional voltage mode control at the time.
However, once Figure 4.18 has been implemented, VMC has effectively imbibed the key
advantage of CMC. One question remains: how good is the “built-in” automatic line feedforward
of CMC, compared to VMC with line feedforward? It turns out the latter is better. Because in a
buck topology, the slope of the inductor current up-ramp is equal to (VIN − VO)/L. So if we double
the input voltage, we do not end up doubling the slope of the inductor current or the PWM ramp
as desired. That means the duty cycle does not halve exactly, as we want it to, based on D =
VOUT/VIN. However, in the case of VMC with line feedforward, it does exactly that, as explained in
Figure 4.18.
In other words, voltage mode control with proportional line feedforward control, though
inspired by current-mode control, provides better line rejection than current mode control (for
a buck).
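The imperfection of CMC’s built-in feedforward is easy to see numerically. The sketch below (output voltage and inductance are illustrative assumptions) shows that doubling VIN does not double the buck inductor up-slope, and hence does not double the CMC ramp:

```python
# Numeric illustration: the buck inductor up-slope (VIN - VO)/L does not
# scale proportionally with VIN, so CMC's "built-in" line feedforward
# (ramp derived from the inductor current) is only approximate.
# VO and L values are illustrative assumptions.
VO, L = 5.0, 10e-6   # 5 V output, 10 uH inductor

for VIN in (10.0, 20.0):
    slope = (VIN - VO) / L   # inductor current up-slope, in A/s
    print(VIN, slope)
# Doubling VIN (10 V -> 20 V) triples the slope here (0.5e6 -> 1.5e6 A/s),
# instead of doubling it as ideal proportional line feedforward would.
```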
For a boost topology, using its duty cycle equation, we can similarly derive the gain of the switch
stage.

VO = VIN / (1 − D)

dVO/dD = VIN / (1 − D)²

G(s) = (VIN/VRAMP) × [1/(1 − D)²] × [1/(L′C)] × (1 − sL′/R) / [s² + s/(RC) + 1/(L′C)]    (boost: plant transfer function)
where L′ = L/(1 − D)², as discussed previously. It is the inductor in the “equivalent post-LC filter” of
the canonical model. Also note that C remains unchanged. It is just COUT.
Alternatively, the above transfer function can be written as

G(s) = (VIN/VRAMP) × [1/(1 − D)²] × (1 − s/ωRHP) / [(s/ω0)² + s/(ω0Q) + 1]    (boost: plant transfer function)

where ω0 = 1/√(L′C), and ω0Q = R/L′.
We have included a surprise term in the numerator, the RHP zero, which can be shown to be
present in both the boost and buck-boost, after detailed modeling. Its location is

fRHP = R × (1 − D)² / (2πL)    (boost)
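A quick numeric check of the boost RHP-zero location, using the formula fRHP = R(1 − D)²/(2πL). The R, D and L values below are illustrative assumptions; the point is that the zero moves lower as D rises, squeezing the usable crossover frequency:

```python
import math

# Boost RHP-zero location, f_RHP = R*(1-D)^2 / (2*pi*L).
# R, D and L values are illustrative assumptions.
def f_rhp_boost(R, D, L):
    return R * (1 - D)**2 / (2 * math.pi * L)

f_lo_duty = f_rhp_boost(R=10.0, D=0.5, L=100e-6)   # ~4 kHz
f_hi_duty = f_rhp_boost(R=10.0, D=0.8, L=100e-6)   # ~0.64 kHz

print(round(f_lo_duty), round(f_hi_duty))
# The zero drops as D increases (or as L grows), forcing a lower crossover.
```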
Similarly, for a buck-boost we get:

VO = VIN × D / (1 − D)

dVO/dD = VIN / (1 − D)²

(Yes, it is an interesting coincidence --- the slope of 1/(1 − D) calculated for the boost is the
same as the slope of D/(1 − D) calculated for the buck-boost!) So the control-to-output transfer
function is
G(s) = (VIN/VRAMP) × [1/(1 − D)²] × [1/(L′C)] × (1 − sL′D/R) / [s² + s/(RC) + 1/(L′C)]    (buck-boost: plant transfer function)
Some of the rather unexpected situations described above, and others we will encounter, can
be visualized more elegantly in terms of the “dreaded” Laplace transform technique. We need
to feel comfortable with it, and for that reason we will discuss it a bit here.
We will also take this opportunity to restate, summarize or emphasize some of our key “lessons
learned”.
The best way to quell our fear of the Laplace transform is to understand that we are simply
moving to an alternative mathematical domain to simplify computation. We have been doing
the same thing for years using log math. See Figure 4.19 and Figure 4.20. Complex
multiplications or divisions of very large numbers get reduced to easy addition and subtraction
instead. Of course, we rely on tables previously created, to go in and out of this logarithmic
plane. So in a sense, the spade work was already done once and for all, by creating the log and
antilog tables. Because, to return to the normal, non-logarithmic plane, we have to use antilog
or inverse-log tables.
Figure 4.19: Using logarithms to simplify multiplication of large numbers
(Figure 4.20 appears here: a difficult equation involving large linear numbers is mapped
through logarithm tables into a simple computation on ratios, and the result is mapped back
using antilog tables.)
See Figure 4.21 now, for the Laplace transform technique. This figure does not intend to show
a closed-loop control system. Think of it for example, as just a simple filter stage, consisting of
various capacitors, resistors and inductors. We apply an arbitrary input signal or impulse, and
we are interested in seeing what happens at the output of this network. We discover that the
differential equations to solve this problem (in the normal “time domain”) become very
complicated.
It also turns out that using the alternative “frequency domain” or “s-plane”, i.e. the Laplace
transform, the math becomes simpler. But once again we rely on a bunch of readily available
tables, with the help of which we can move in and out of the new mathematical computation
domain.
Figure 4.21: The Laplace transform technique (frequency domain analysis)
What exactly are we achieving by the Laplace transform? Essentially, we are breaking up an
arbitrary non-repetitive, time-varying signal or impulse (the “disturbance”), into a continuous
spectrum of both positive and negative frequency components (i.e. in the frequency domain).
This is akin to the well-known Fourier analysis technique used for decomposing repetitive
waveforms into discrete (positive) harmonics. Note that decomposition approaches are easier
for the same reason that we routinely break up “force” in classical mechanics, into its
“independent” x, y and z components, do the math for each “independent” axis separately,
compute the x, y and z components of the acceleration, and finally sum them up vectorially to
give us the final acceleration vector: i.e. its magnitude and direction. That is what we are doing
in the Laplace transform method too, quite similar to what we did in high-school with the
Fourier series, except that the decomposed frequencies are now a continuous spectrum. We
will come to this in a little more detail in the next chapter.
Summarizing, think of Figure 4.21 as a simple 2-port network, say a combination of several
resistors and capacitors, with an input time-varying excitation (voltage or current), and we are
trying to deduce the output waveform. Usually, we will need to set up complicated differential
equations to solve the problem. However, the Laplace transform method allows us to take the
Laplace transform of both the circuit and the excitation. As a result, this becomes a simple
algebraic problem where we can sum over (integrate in our case) the result of the various
frequency components. Finally, we take the inverse Laplace transform to map the result back
into the time domain. We thus see the desired output voltage (or current). As mentioned, the
reason we get away with this “simplification” is that all the drudgery has already been done
beforehand in the form of comprehensive lookup tables for Laplace and inverse Laplace
transforms.
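The “table lookup” idea can be demonstrated concretely. For a unit step into a simple RC low-pass filter, the standard transform tables give Vout(s) = (1/s) × 1/(1 + sRC), whose inverse transform is vout(t) = 1 − e^(−t/RC). The sketch below (R and C values are illustrative assumptions) checks that table result against brute-force numerical integration of the time-domain differential equation:

```python
import math

# The Laplace route vs. brute force, for a unit step into an RC low-pass filter.
# Transform tables give:  Vout(s) = (1/s)*1/(1 + s*R*C)  -->  vout(t) = 1 - e^(-t/RC)
# Below we check that against plain numerical integration of the ODE
#   RC * dv/dt = vin - v.   R and C values are illustrative assumptions.
R, C = 1e3, 1e-6          # tau = RC = 1 ms
tau = R * C
dt = tau / 10000.0
v, t = 0.0, 0.0
while t < 3 * tau:        # integrate out to three time constants
    v += dt * (1.0 - v) / tau
    t += dt
analytic = 1.0 - math.exp(-t / tau)
print(round(v, 4), round(analytic, 4))   # both ~0.95: the table result is confirmed
```

The differential equation here is trivial, but for networks with many reactive elements the algebraic s-plane route wins decisively over setting up and solving coupled differential equations directly.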
4.16 Understanding Delays
With the Laplace transform technique, we can show that a certain time delay is equivalent to a
“phase lag” in the frequency domain, one which is proportional to the frequency of the
component.
As previously mentioned, in switching power supplies too, data is not sampled and acted upon
continuously. In other words, there are inherent delays on account of the stream of discrete
pulses. That also implies there will be increasing phase lag as we approach frequencies close to
the switching frequency. To avoid meeting our criterion of instability (T = -1) prematurely, we
must start by fixing the crossover frequency to less than one-fifth the switching frequency.
That makes sense because the term “phase” (angle) has no relevance unless we are talking in
terms of a specific frequency and its associated time period (of repetition)—which we are doing
in this case, through our frequency decomposition technique via the Laplace transform.
The “compensator” (feedback network) of the control loop can also introduce additional delays,
with corresponding frequency-dependent phase shifts, which add to the inherent delays
present in the response of the plant to the disturbance. These compensator delays are easier to
understand, because the feedback circuit typically involves several resistors and capacitors,
with a bunch of interrelated RC time constants. In particular, the capacitors present may also
need a finite time to charge or discharge to their new average values through all their
accompanying resistors. And that incidentally leads to the compensator’s “poles” or “zeros”,
depending on how the elements are arranged within the high-gain error amplifier of the
compensator. We will talk of Type 2 and Type 3 analog compensators in more detail soon.
We have seen that mathematically, if T = -1, we get full-blown instability. Intuitively, we can
visualize instability as a situation where the system is trying to respond to a completely
outdated/ delayed command without realizing it (reading “up” instead of “down” for example),
and thus continues to head in the opposite (wrong) direction every time. The delay in response
now is exactly one half-cycle, the word “cycle” referring to the decomposed component’s culprit
frequency. This is full-blown instability, even though the system itself does not literally explode.
And that is why, as mentioned earlier, if we suddenly go from 0A to 5A load in a switcher for
example, the initial dip in the output voltage may have little to do with the control loop
characteristics. Indeed, the control loop can always make things worse, but even if it is
considered optimized here, the output response may eventually be determined only by the
output bulk capacitance vis-à-vis the inductance, since the output capacitor needs to supply the
entire additional energy demanded till the current in the inductor can ramp up and take over.
And if the capacitance happens to be inadequate to start with, we need to refer back to Figure
4.7.
Similarly, as also explained in Figure 4.7, if we suddenly go from 5A to 0A (unloading), we may
find to our horror that though we saw the output voltage jump up and respond by even halting
switching action completely, the output rail continued to rise, almost out of our control for a
short while. The reason for that is the inductor stubbornly pushes out all the stored energy
related to its initial current setting, into the output capacitors where it can be stored indefinitely
as electrostatic energy as required, since the attached load isn’t demanding energy anymore.
Perhaps that reminds us of the hazy outlines of our forgotten Physics 101 course: energy can be
converted or dissipated as heat (in resistors), but never wished away. That is why we need catch
diodes (and output capacitors) in any switcher in the first place: to freewheel the current
associated with the stored magnetic energy. Dispense with the diode, and we get a worthy spark
ignition system instead of a switcher: lots of heat and light, but no useful power.
But there are still some remaining “subtleties” with regards to non-buck topologies, as shown
in Figure 4.4. For example, during a line (input) disturbance, as opposed to a load transient,
things become quite different for the boost and the buck-boost topologies, as compared to the
simple scenario described in the figure for a buck topology on the left-hand side. The reason is,
in a buck, the average inductor current equals the load current (in steady state), and is therefore
constant during a line disturbance. There is no delay attributable to any inductor
“reinitialization” problem, except of course for a load transient.
However, in a boost or buck-boost, the average inductor current is a function of the duty cycle,
unlike a buck, and thus needs to move to a new average value if we vary the input voltage, even
if the load current is held constant! So now, the inductor reinitialization issue returns to haunt
us, along with the other inherent delays present in the control loop—even during a supposedly
“pure line disturbance”! See the right-hand side of the figure.
This goes to show that not all “disturbances” are alike, nor all topologies, and we must be
cognizant of quite a few such unexpected “subtleties” when we try to move basic control theory
concepts over to switchers.
We remind ourselves once again that to prevent smaller errors of perception from snowballing
into masses of confusion, we must be very clear about the exact meaning of all the terms in
common use.
As mentioned, a prime example of that is the concept of “closed loop gain”. And the other side
of the very same silver coin: “(open) loop gain” or “T”. Many power supply engineers continue
to think that open loop gain is some sort of amplification factor that gets applied to a vague,
unspecified “disturbance” when the feedback loop is literally “open”: i.e. broken or non-existent.
Then, as a corollary, they assume that in contrast, closed loop gain must be what we measure
when the feedback loop is actually present!
A sideshow of this confusion is that a hands-on power supply engineer may wonder why, when
he or she runs Bode plots using a standard bench network analyzer, the machine claims that it
is measuring the “open loop gain”. “Why isn’t it giving us the closed loop gain, considering the
fact that the loop is in reality closed?” And so on. One wrong premise leads to many wrong
conclusions. Some engineers wisely use the term “loop gain” instead of open loop gain, as we
too eventually did on previous pages. Others prefer to call “T” the “round transfer
function”. It can get a bit confusing.
Many switcher engineers/authors did realize early on that closed loop gain was VOUT/VREF, not
VOUT/VIN. But then, instead of giving examples showing line/load disturbances, they
inadvertently propagated the fallacy further by documenting the overshoots and ringing when
the reference voltage is ramped up suddenly from 0V. For example, we will often see in related
literature the case of a “1/s” “step disturbance” applied to the system, where s = jω as usual.
But in this case, the step is the shape of the reference voltage. You may wonder why it is relevant.
We need to remember that:
a) Every power converter starts up initially with its reference rising up to its set value, so that
hardly qualifies as a “disturbance” of interest to us.
b) Besides, the reference voltage, and the output voltage, rarely come up “instantly”, since in a
practical case both are usually brought up gradually under the influence of a closed-loop soft-
start circuit. The reference is typically slow to rise, as it comes via a 0.1 μF ceramic decoupling
capacitor placed on the current-limited REF pin, which is charged up slowly.
c) Even if we assume VREF did come up abruptly, the output itself would take a very long time
after that, relatively speaking, to ramp up to its steady value, since it has to first charge the
rather sizeable output bulk capacitance across it, through the intervening (slew-rate limiting)
inductor. So this response scenario has really nothing to do with jerking/wiggling the reference
voltage around, even if that is considered relevant.
Indeed, the way the output comes up with no soft-start implemented, may show some ringing
which qualitatively mimics the ringing observed during line and load transients.
We should also point out a practical, often overlooked problem with measuring what some
engineers still call “open loop gain”: in a modern high-gain switcher, there may be no easy way
to “break” the loop, i.e. to literally open it, without causing disastrous effects on the output,
let alone testing it successfully in that state.
Indeed, that can be done on occasion, as when trying to stabilize relatively low-gain mag-amp
post-regulators using the oft-mentioned K-factor method.
The venerable K-factor method from Mr. Venable, a subject of many popular articles on
feedback control, implicitly requires knowledge of the gain with no feedback present, i.e. with
the loop broken—as a means of optimizing the feedback loop when it is finally introduced. And
so, even though some engineers insist that the K-factor is all that is ever required for stabilizing
switchers, it is usually impractical in most modern cases.
Besides the practical aspects, the K-factor technique also unfortunately trades off gain for phase
margin by reducing gain at low frequencies and increasing it at higher frequencies—quite the
opposite of what we usually try to do, for reasons we will soon discuss. That is why the K-factor
technique is perhaps only well-suited for post-regulators, where a steady almost ripple-free DC
input rail is present to start with— such as mag-amps! But rarely otherwise.
The K-factor method as applied to a Type 3 amplifier attempts to put two coincident zeros a
factor of 1/√K below crossover and two coincident poles a factor of √K above crossover. As we
will see, that does nothing to the peaking of the LC pole, which is not only a potential cause of
conditional stability, but affects the ringing at the output during line/load transients as we will
soon uncover.
But the irony of it all is that all the effort the K-factor based “optimization” represented was
perhaps ultimately for only a questionable improvement in something called the phase margin.
Questionable, because no one seems to fully agree on what the “optimum phase margin” is,
and why.
Let’s recapitulate: stability ultimately depends on the following basic question: What happens
if the disturbance undergoes various delays as it goes around the loop? For example, even in
the simple case of a thermostatic room air-conditioner, a) it may take time for the sensor to feel
the change if a window is thrown open. b) After that, the heater or air-conditioner will need
some time to activate and respond. And so on. As also mentioned earlier, these delays can be
modeled as frequency-dependent phase lags. And so, though we have usually been writing out
the gain functions as simply T, G, H, etc., in reality they should be written as T(s), G(s), H(s), etc.,
indicating their frequency dependence and inherent phase angle too, besides their magnitude.
Similarly, for a switcher, we can now visualize a situation where an additional 180° phase shift
can easily occur for some unspecified “harmonic” (frequency component) of the original
disturbance. It will then reinforce itself after going around the loop, and the system could
break into oscillations.
For rejecting low- to mid-frequency disturbances, it is now obvious that we need to try and
maximize the DC gain. But we also need to deliberately roll the gain off at higher frequencies to
avoid instability. Ultimately, we have to adhere to a simple criterion: at the specific frequency
where an additional 180° of phase lag occurs, the gain must have fallen below 1 to avoid
oscillations (gain margin). That ensures any disturbance gets abated as it goes around the loop.
Alternatively, we must ensure that when the signal comes around full-circle with a gain of 1, its
phase lag is not enough to reach the ominous level of -180° (phase margin).
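The criterion can be checked numerically. The following sketch uses the buck example numbers that appear later in this chapter (VIN = 15, VRAMP = 2.14, fLC = 3.918 kHz, fESR = 10.05 kHz, fp0 = 7.133 kHz, R = 0.2 Ω, L = 5 µH, C = 330 µF); the simplified loop models are our own illustrative assumptions, not the book's exact curves:

```python
import cmath, math

GDC, FP0, FLC, FESR = 15 / 2.14, 7.133e3, 3.918e3, 10.05e3
Q = 0.2 * math.sqrt(330e-6 / 5e-6)                 # Q = R*sqrt(C/L), ~1.62

def loop_gain(f, with_zeros):
    s = 1j * 2 * math.pi * f
    w0, wesr = 2 * math.pi * FLC, 2 * math.pi * FESR
    plant = GDC * (1 + s / wesr) / (1 + s / (w0 * Q) + (s / w0) ** 2)
    comp = (2 * math.pi * FP0) / s                 # pole-at-origin (integrator)
    if with_zeros:                                 # two zeros at fLC, pole killing the ESR zero
        comp *= (1 + s / w0) ** 2 / (1 + s / wesr)
    return comp * plant

def margins(with_zeros):
    freqs = [10 ** (3 + 2.4 * i / 2000) for i in range(2000)]  # 1 kHz .. 250 kHz
    fc = min(freqs, key=lambda f: abs(abs(loop_gain(f, with_zeros)) - 1))
    pm = 180 + math.degrees(cmath.phase(loop_gain(fc, with_zeros)))
    if pm > 180:          # unwrap: lags beyond -180 deg show up as negative margin
        pm -= 360
    return fc, pm

fc, pm = margins(True)     # compensated: ~50 kHz crossover, healthy margin
fc2, pm2 = margins(False)  # bare integrator, no zeros: margin goes negative
```

Without the two compensating zeros, the LC pole's extra 180° of lag arrives before the gain has fallen below 1, and the computed phase margin is negative: exactly the instability the criterion guards against.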
4.22 Typical analog compensation exercise
[Figure content: the compensator's “pole-at-zero” (also called “pole-at-origin” or “dominant
pole”) falls at a “-1” slope and crosses 0 dB at “fp0”; the two zeros are placed at fLC; the loop
gain T crosses over at fCROSS. On the dB scale (linear vertical axis, 0 dB to 70 dB) the gains
simply add, e.g. GdB + HdB = TdB: -20 + 30 = 10; -40 + 40 = 0; 20 + 50 = 70; 20 + 10 = 30.]
Figure 4.22: Classic analog control loop design (simplified) and resulting loop gain
In Figure 4.22 we show a typical compensation exercise, but in terms of gain magnitudes only
so far. It breaks up the “almost” straight line of T into its constituent G and H components. Note
that the double LC “pole” of the plant, which is responsible for the “-2” slope of G after the
breakpoint, has been (roughly) canceled out by placing two “zeros” at the exact position of the
LC, by the compensator. So we are left with an (almost) straight line for T, essentially coming
from the compensator's low-frequency gain profile, though displaced vertically by an amount
exactly equal to the DC gain of the plant, as we will see shortly.
Note that the loop gain T is simply the product of the two cascaded gains, G and H, but on a log
plane, things are easier. We can just sum the two curves of GdB and HdB, to give us TdB. In other
words, once we express the gains in decibels, we can just sum them up, rather than multiply
them out, as shown in the figure.
In the figure, we can also see we have set a very high DC gain (via the compensator) —as
recommended for attenuating low- to mid-frequency disturbances. In fact, theoretically, the
gain is infinite at 0 Hz, but in reality it gets limited by the inherent characteristics of the error
amplifier, though that is not shown in the figure for the sake of simplicity, and also because it is
practically impossible to display 0 Hz on a log scale anyway. But we have shown a dotted line
extending to some very low frequency, and that is called the “pole-at-origin” or “pole-at-zero”,
among other monikers. But it has no location that we can really specify or draw out. What we
do know however is, wherever it is located, it causes the gain to fall at “-1” slope thereafter.
So its exact location, i.e. how low-frequency it really is, is reflected only by the frequency at
which it intersects the unity gain (0dB) axis. That frequency is what we are calling “fp0” here.
Indeed, we may place two zeros in the compensator well before the H dB curve ever gets to cross
the 0dB axis. But if we draw a dotted line to the 0dB axis, the intersection frequency is fp0 as
shown in Figure 4.22.
“fp0”, the crossover frequency of the pole-at-origin, needs to be set very carefully because it is
the key parameter which ultimately determines the crossover frequency of interest to us: fCROSS
(the crossover of T). Both fp0 and fCROSS are related through the DC gain of the plant, which in
turn is completely responsible for the vertical arrow shifts shown in Figure 4.22. We will now
see what that exact relationship is.
The plant, as we know by now, has three main constituents, and its gain is the product of its
three stages (ignoring the ESR zero for the moment):

G(s) = (VIN/VRAMP) × 1/(1 + s(L/R) + s²LC)
So its DC gain (the flat portion of G up to the break frequency in Figure 4.22) is simply VIN/VRAMP
(or 20 log of that, in decibels). As we can also see from this figure, this is the amount by which
the compensator gain profile gets shifted upwards to create “T”. Since we are dealing with a
“-1” (inverse proportionality) curve for T, it is easy to see that the following relationship tells us
exactly how we must position fp0 of the compensator, to achieve a certain desired fCROSS for T:

fp0 = fCROSS × (VRAMP/VIN)
So, the loop gain curve in Figure 4.22 “crosses over” at a frequency fCROSS, which implies a gain
of 1 at that frequency (on a log scale it is the zero of the y-axis, since log 1 = 0). To stay well away
from any phase lag effects causing outright instability on account of the discrete/sampling issues
related to the switching frequency of switchers, fSW, it is customary to set fCROSS to at most
fSW/2 (Nyquist's sampling criterion). But in fact, it is far better to set the crossover somewhere
between fSW/10 and fSW/5. Not higher.
Note that the 180° inherent phase lag on account of negative feedback (“negative” though only
at low frequencies as we now realize), is rarely plotted out. It is “understood”. Only the
additional phase shift introduced by the feedback network and the plant combined, is displayed
on a typical Bode plot.
There are also other parasitics that come into the picture, which we have neglected so far. One
is the ESR-zero, coming from the ESR of COUT. Using a Type 3 compensator, we try to kill this
zero (from the plant) by placing a pole at the same exact position. Besides the pole-at-origin,
a Type 3 compensator produces 2 zeros (both of which we have used up already, to cancel the
LC double pole) and also 2 poles, one of which we can use to kill the ESR-zero. See Figure 4.23.
That leaves us with one additional pole to play with, called “fp2” here. Some people say we
should place it at 10×fCROSS; others say that to attenuate the high-frequency ripple component,
we need to place it at fSW/2. Lloyd Dixon suggested placing it at exactly fCROSS. This was called
the “optimized solution” in A to Z Second Edition.
In Figure 4.24, we have shown the poles and zeros from a typical analog control loop exercise,
extracted from A to Z Second Edition.
Note that this is no longer the “asymptotic approximation” used in Figure 4.22. It has all the
curved regions, plotted out using Mathcad.
[Figure 4.24 content: gain (dB) and phase (degrees) plots of the compensator Hdb(f), plant
Gdb(f), and loop gain Tdb(f) versus frequency (100 Hz to 1 MHz), with the condition for
coincident zeros indicated, and Rload at its maximum-load value here (0.2 Ω, 5 A). Solid lines
are for fp2 placed at 10×fCROSS; dotted lines for fp2 placed at fCROSS (called the “optimized
solution” in A-Z/2e). The phase margin drops from 78.1° to 41.9° for the “optimized solution”.]
[Figure content: Bode plot of the classic analog control loop design, showing the plant (G), the
compensator (H), and the loop gain (T = GH), with phase (deg) from -180° to 180°. Key
frequencies: 3.9 kHz (fLC, fz1, fz2); 7.1 kHz (fp0); 10 kHz (fesr, and fp1, the single pole placed to
cancel the single zero from the ESR of COUT); 40 kHz (fp2, the single pole often placed at fcross
to “optimize” phase margin and increase gain margin); crossover frequency/bandwidth ~40 kHz
with a phase margin of ~45°; switching frequency 300 kHz. Note: we set fcross to 50 kHz
actually, but using Lloyd Dixon's method the actual crossover frequency drops slightly from the
targeted value (50 to 40 kHz in this case).]
Figure 4.25: Summary of pole-zero placements for previous figure and example
In Figure 4.25, we summarize the compensation strategy we have been talking about, showing
exactly what happens (in terms of the frequencies involved), for the plant G, compensator H,
and of course “T”.
In Figure 4.25, we present a table of the components of a Type 3 compensator, based on the
transfer function equation in Figure 4.23, showing how all but one component are involved in
more than one pole/zero position. That is why it becomes so difficult to change anything in an
analog loop. Changing just one component
can have a domino effect on the gain curves, with “unintended” consequences. We will discuss
this in more detail shortly.
The Type 1 compensator is of no practical use on its own, but it is a key building block of Type 2
and Type 3 compensators. It is an integrator, and it is where the pole-at-origin, fp0, comes from.
The key general function behind it is plotted out in Figure 4.28. It has the form

H(s) = A/s = 1/(s/A),  A > 0

This crosses over at ω0 ≡ ωp0 = A, or equivalently fp0 = A/2π. So we can adjust it (translate it
upwards or downwards) by changing A.
Note: It is always preferable to write all pole and zero functions in the form (s/ω0)x, to avoid
needless confusion about the DC gain contribution from the functions.
The math behind the op-amp embodiment of this function, the integrator, is shown in Figure
4.29. We get the equation for fp0 as:

fp0 = 1/(2πRC)
Note: In a Type 3 compensator (Figure 4.25), this pole-at-origin (integrator function) is created
by R1C1, not R1C3 as commonly and erroneously assumed. The reason is that we are working
under the assumption C1 >> C3; otherwise the mathematical solutions for the locations of the
poles and zeros are extremely intricate, and thus unusable.
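A quick numeric check (our own) that the integrator's gain magnitude 1/(2πfRC) crosses unity (0 dB) exactly at fp0 = 1/(2πRC), using the R1 = 2 kΩ, C1 = 11.16 nF values from the worked example later in this chapter:

```python
import math

# R and C of the op-amp integrator (worked-example values)
R, C = 2e3, 11.16e-9

fp0 = 1 / (2 * math.pi * R * C)                    # ~7.13 kHz
# gain magnitude of the integrator evaluated at f = fp0, in dB
gain_db_at_fp0 = 20 * math.log10(1 / (2 * math.pi * fp0 * R * C))  # exactly 0 dB
```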
Simplified Transfer Function plots of Feedback Stages (Compensators)
[Figure content: simplified gain (dB) plots of the three compensator types, each built around an
op-amp with VREF at one input and VCONT at the output. All have a “pole-at-zero” (integrator),
also called “pole-at-origin”. Type 3: 2 poles + 2 zeros; both zeros set at the LC pole, one pole at
the ESR zero. Type 2: 1 pole + 1 zero; the zero set at the LC pole. Type 1: pure integrator only;
not used by itself, but part of every compensation scheme and implementation, as above.]
[Figure content: the plant transfer function in current mode control has a single “load pole”
(also called the “dominant pole” or “output pole”) at fP, plus an ESR zero at fesr (not shown)
and, for the boost and buck-boost, an RHP zero at fRHP (not shown). At light loads the load pole
shifts to the left, but the DC gain rises proportionally, so the intersection with the 0 dB axis is
unchanged. Location of the RHP zero:

fRHP = R(1 − D)²/(2πL)   (Boost)
fRHP = R(1 − D)²/(2πLD)   (Buck-Boost)

A table in the figure lists the DC gain GO and load-pole frequency fP of the buck and boost in
CMC, in terms of R, Rmap, CO and the slope-compensation parameter m.]
Figure 4.28: The plant in current mode control (suitable for Type 2 compensator)
Figure 4.29: The integrator function (inversely proportional to frequency)

[Figure content: an op-amp integrator with input resistor R and feedback impedance 1/Cs, giving]

Transfer function: vo/vi = −(1/Cs)/R = −1/(RCs), i.e. a pole-at-origin with ωp0 = 1/RC

So: fp0 = 1/(2πRC)
4.23 Compensating Other Topologies
As mentioned, we can luckily break up the switch and L-C combination of a boost and buck-
boost into a switch followed by a distinct and equivalent cascaded post-filter stage, consisting
of the same C (output capacitor), in series with an “equivalent inductor” of value:

Lequivalent = L/(1 − D)²
In effect, that makes the effective inductance a function of the input voltage. Hence the treatment
can get rather complex, since the effective LC resonant frequency moves to lower and lower
frequencies as we lower the input voltage (higher D), and the RHP zero moves down in frequency
with it. That is, intuitively, what eventually contributes to the RHP zero instability mentioned
earlier. The conventional solution to the RHP zero problem is to virtually accept that there is no
solution! We just have to roll off the loop gain at a much lower frequency than the fSW/10 to
fSW/5 typically targeted for a buck. Maybe we need to go closer to fSW/20, or even lower. This
is also why we can hardly expect excellent control loop response from, say, a typical power factor
correction (boost) stage, or a “cheap and dirty” flyback (buck-boost).
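The equivalent-inductance picture is easy to put in numbers. A minimal sketch (L, C and R below are illustrative values borrowed from this chapter's buck example, not a worked boost design) showing how the effective LC resonance and the RHP zero both slide down in frequency as D rises:

```python
import math

L, C, R = 5e-6, 330e-6, 0.2   # illustrative inductor, output cap, load

def f_lc(D):
    """Effective LC resonance using the equivalent inductance L/(1-D)^2."""
    Leq = L / (1 - D) ** 2
    return 1 / (2 * math.pi * math.sqrt(Leq * C))

def f_rhp(D):
    """RHP zero of the boost, R*(1-D)^2 / (2*pi*L)."""
    return R * (1 - D) ** 2 / (2 * math.pi * L)

# Both frequencies fall as D rises (input voltage falls),
# squeezing the usable loop bandwidth at low line.
```

At D → 0 the effective resonance returns to the plain LC value, while at high D both the resonance and the RHP zero crowd down toward the intended crossover region, which is why the bandwidth must be curtailed.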
Finally, we present a summary of the plant functions for VMC and CMC, for easy reference.
Below you can find the buck, boost and buck-boost (all in VMC) respectively, followed by their
CMC counterpart.
Buck (Plant Gain, G(s) in VMC):

G(s) = (VIN/VRAMP) × (1 + s/ωESR) / (1 + s/(ω0Q) + (s/ω0)²)

where ωESR = 1/(ESR×C) is the ESR zero, ω0 = 1/√(LC) is the LC double pole, and ω0Q = R/L.
The DC gain is 20 log(VIN/VRAMP). [Figure notes: the double LC pole occurs at ω0. There can be
severe peaking in the gain plot right there, because of the high quality factor, Q = R√(C/L). This
is also accompanied by a sudden 180° phase shift (in CCM) that can lead to instability or ringing.]
Boost (Plant Gain, G(s) in VMC):

G(s) = [VIN/(VRAMP(1 − D)²)] × (1 + s/ωESR)(1 − s/ωRHP) / (1 + s/(ω0Q) + (s/ω0)²)

where ωESR = 1/(ESR×C) is the ESR zero, ω0 = 1/√(LC) is the LC double pole, ω0Q = R/L, and
ωRHP = R/L is the Right Half Plane zero. Here C is the output cap (with its ESR), R is the load
resistance, and L is the inductance of the inductor divided by (1 − D)². [Figure notes: there can
be severe peaking at the LC double pole because of the high quality factor, Q = R√(C/L),
accompanied by a sudden 180° phase shift (in CCM) that can lead to instability or ringing. The
DC gain of the plant, 20 log(VIN/[VRAMP(1 − D)²]), varies with input voltage in conventional
voltage mode control, so the loop response changes with line (a line feedforward feature is
possible, but complicated to implement). The ESR of a cap has wide tolerance/spread and can
vary with frequency and time (aging), but the ESR zero can be roughly canceled with a pole from
the “compensator”; with very low-ESR caps, like multilayer ceramics, this zero moves out to a
very high frequency and is then of almost no concern anyway. The RHP zero needs to be canceled
or moved out to a very high frequency so it becomes irrelevant (very difficult to cancel out; we
may need to reduce bandwidth significantly).]
Buck-Boost (Plant Gain, G(s) in VMC):

G(s) = [VIN/(VRAMP(1 − D)²)] × (1 + s/ωESR)(1 − s/ωRHP) / (1 + s/(ω0Q) + (s/ω0)²)

with the same definitions and caveats as for the boost above: ωESR = 1/(ESR×C), ω0 = 1/√(LC),
ω0Q = R/L, Q = R√(C/L), L being the inductance of the inductor divided by (1 − D)², and a DC
gain of 20 log(VIN/[VRAMP(1 − D)²]) that varies with input voltage. The same CCM peaking and
sudden 180° phase-shift warnings apply.
Buck (Plant Gain, G(s) in CMC):

G(s) = (R/RMAP) × (1 + s/ωESR) / (1 + s/ωo)

where ωESR = 1/(ESR×C) is the ESR zero and ωo = 1/(RC) is the output “load pole”; the
subharmonic instability pole is not shown (it is present in CCM for D > 50%). Here C is the
output cap (with its ESR), R is the load resistance, and RMAP is the transresistance, i.e. the PWM
ramp voltage divided by the corresponding sensed current. [Figure notes: the load pole is a
single pole, with no peaking. Its location is inversely proportional to R, and so proportional to
load current, while the DC gain, 20 log(R/RMAP), is proportional to R, i.e. inversely proportional
to load current. At light loads the pole shifts to the left but the DC gain rises proportionally, so
the crossover frequency (bandwidth) is unchanged as the load changes (dashed line). This DC
gain does not vary with input voltage in current mode control, so the loop response is steady
with respect to input; unfortunately, the DC gain does tend to fall at high load currents. The ESR
of a cap has wide tolerance/spread and can vary with frequency and time (aging); in CMC it must
either be canceled (by a pole in the compensator) or moved out to a very high frequency, so that
it becomes irrelevant.]
Figure 4.36: Buck-Boost in CMC (simplified)
A simple way of collecting a Bode plot on a switcher is shown in Figure 4.37. A current loop and
a passive current probe (snap-on coil) are the basic requirements, along with a standard
HP/Agilent network analyzer such as the 4396B.
And indeed, as the math presented before shows, we do measure T (“open loop gain”) on the
closed loop system, provided we inject the signal at a suitable point, as in Figure 4.38.
Figure 4.37: A simple way of doing a loop gain-phase (Bode) plot on the bench
Figure 4.38: Indeed, we measure (open) loop gain on a closed loop setup
One of the big nuisances regarding typical analog compensators is the difficulty of tweaking any
aspect of the gain profile.
We will illustrate this with an actual example shortly. But the baseline for that is the following
solved example.
Example: Using a 300 kHz synchronous buck controller we wish to step down 15 V to 1 V. The load
resistor is 0.2 Ω (5 A). The PWM ramp is 2.14 V as per the datasheet of the part. The selected
inductor is 5 µH, and the output capacitor is 330 µF, with an ESR of 48 mΩ.
We know that the plant gain at DC for a buck is VIN/VRAMP = 7.009. Therefore, (20 log) of this
gives us 16.9 dB. The LC double pole is at:

fLC = 1/(2π√(LC)) = 1/(2π√(5×10⁻⁶ × 330×10⁻⁶)) = 3.918 kHz
Here we want to set the crossover frequency of the open-loop gain at 1/6th the switching
frequency, i.e. at 50 kHz. Therefore we can solve for the integrator's fp0, and thereby its “RC”,
using

fp0 = fCROSS × (VRAMP/VIN) = 1/(2πR1C1)

With R1 selected as 2 kΩ:

C1 = VIN/(2π × VRAMP × fCROSS × R1) = 11.16 nF

The crossover frequency of the integrator section of the op-amp is

fp0 = 1/(2πR1C1) = 1/(2π × 2.231×10⁻⁵) = 7.133 kHz
The ESR-zero is at

fesr = 1/(2π × ESR × C) = 1/(2π × 48×10⁻³ × 330×10⁻⁶) = 10.05 kHz
The required placement of zeros and poles is

fz1 = fz2 = 3.918 kHz (place both zeros at the LC pole location)
fp1 = fesr = 10.05 kHz
fp2 = 10 × fCROSS = 500 kHz

so that

C2 = [1/(2πR1)] × (1/fz1 − 1/fp1) = [1/(2π × 2×10³)] × (1/3.918 − 1/10.05) × 10⁻³ = 12.4 nF

R2 = R1 × (fp0/fz2) = 2×10³ × (7.133/3.918) = 3.641 kΩ

C3 = 1/(2π × R2 × fp2) = 88.11 pF
R3 = R1 × fz1/(fp1 − fz1) = 2×10³ × 3.918/(10.05 − 3.918) = 1.278 kΩ
We already know C1 is 11.16 nF, and R1 was selected to be 2 kΩ. So here is a summary of all the
components (with the voltage divider component highlighted, to indicate it is an input):
C1 = 11.16 nF, C2 = 12.4 nF, C3 = 88.11 pF, R1 = 2 kΩ, R2 = 3.641 kΩ, R3 = 1.278 kΩ.
This baseline corresponds to the central (solid red) gain curve in Figure 4.39.
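The baseline is easy to verify numerically. A short sketch reproducing the worked example (the tiny difference in C3 versus the book's 88.11 pF is just rounding):

```python
import math

# Worked-example inputs: 15V to 1V buck, fcross = 50 kHz, fp2 at 10*fcross
VIN, VRAMP = 15.0, 2.14
L, C, ESR = 5e-6, 330e-6, 48e-3
fcross, R1 = 50e3, 2e3

fLC = 1 / (2 * math.pi * math.sqrt(L * C))            # ~3.918 kHz
fesr = 1 / (2 * math.pi * ESR * C)                    # ~10.05 kHz
fp0 = fcross * VRAMP / VIN                            # ~7.133 kHz
fz1 = fz2 = fLC                                       # both zeros at the LC pole
fp1, fp2 = fesr, 10 * fcross                          # cancel ESR zero; fp2 = 500 kHz

C1 = 1 / (2 * math.pi * R1 * fp0)                     # ~11.16 nF
C2 = (1 / (2 * math.pi * R1)) * (1 / fz1 - 1 / fp1)   # ~12.4 nF
R2 = R1 * fp0 / fz2                                   # ~3.641 kohm
C3 = 1 / (2 * math.pi * R2 * fp2)                     # ~87 pF (book rounds to 88.11 pF)
R3 = R1 * fz1 / (fp1 - fz1)                           # ~1.278 kohm
```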
We first ask: how do we lower fCROSS, only, without changing the basic location of the poles and
zeros. In other words we simply want to translate the red solid curve vertically down. The first
step is to double C1, because R1 and C1 determine fp0 (the crossover of the pole-at-origin “p0”
as per Figure 4.40), and R1 is preferably fixed since it is part of the voltage divider. However,
now looking at the interaction matrix in Figure 4.40 we see that C1 is also part of the second
zero “z2”, and this doubling of C1 will no doubt lower fz2. We can see this as step #1, the red
dashed gain curve in Figure 4.39. But that is not what we wanted. So looking again at Figure
4.40, we realize that to get fz2 back to where it was, we need to go through step #2: halve R2.
This is the blue dashed line in Figure 4.39. Unfortunately, since R2 was also part of p2 as per Figure
4.40, halving R2 has shifted fp2 to a higher frequency. We need to correct that too. This is done
through step #3, where we double C3. This gives us the solid blue line in Figure 4.39, and since
C3 is only part of p2, the domino effect stops right here, luckily.
Similarly, if we want to raise fCROSS, we can go through the three steps #A, #B and #C shown in
Figure 4.39.
We can achieve our target of raising or lowering fCROSS without changing the locations of the other
poles/zeros, but with a total of three component changes!
It is not as simple as putting one decade box somewhere in the compensator, and blindly
tweaking the Bode plot.
Now suppose we want to shift both coincident zeros to half their original frequency, perhaps
because we changed the inductor and/or output capacitor to shift the LC pole to half the
frequency. Looking at Figure 4.41 we see that though shifting fz2 seems easy, we are unable to
intuitively change fz1, since we get trapped in a strange circle.
Keep in mind that from the locations of the zeros, there is a constraining relationship which we
may not have explicitly recognized so far:

fz1 = 1/(2π(R1 + R3)C2)
fz2 = 1/(2πR2C1)

So if fz1 = fz2, we have

C1/C2 = (R1 + R3)/R2
This is the constraining relationship inherent in our strategy. Indeed we can confirm by
plugging in the numerical values from the book, that this is true. Further, it needs to be
maintained wherever our LC pole is positioned, as per our compensation strategy.
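A quick numeric check of this constraint with the worked-example component values:

```python
# Constraint C1/C2 = (R1 + R3)/R2 when fz1 = fz2 (coincident zeros)
C1, C2 = 11.16e-9, 12.4e-9
R1, R2, R3 = 2e3, 3.641e3, 1.278e3

lhs = C1 / C2               # ~0.900
rhs = (R1 + R3) / R2        # ~0.900
```

Both sides come out to about 0.90, confirming the relationship holds for the baseline values.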
If we do not, our simple strategy will break down, and all bets are off. We may be able to
manually tweak crossover frequency and/or phase margin on the bench by using a decade box
for one of the resistors involved, as is often done, but at best that would be a minor tweak. In
reality as we see below, many components have to be changed simultaneously.
As mentioned, halving fz2 is relatively easy. All we need to do, is to double R2. But since pole
“p2” also depends on R2, to keep it from moving we halve C3. So that is over, because C3 is
involved only in “p2”, not in any other pole or zero location.
Shifting fz1 is however very tricky, and cannot be done intuitively. First, it involves three
components: R1, R3 and C2. We don't want to be forced to change R1, since that is part of the
voltage divider. However, if we simply double C2, this also affects pole “p1”, and to keep that
unchanged we need to halve R3. But R3 is also involved in the zero “z1”, and so the entire
process seems convoluted. Since C2 gets multiplied by (R1 + R3) in the location of “z1”, we do
move the location of z1, but by a certain weighted amount, based on the value of R3:

fz1 = 1/(2π(R1 + R3)C2)
In other words, it is hard to predict what the values of the RC’s are for changing fz1. We need to
go back to the basic equations for calculating all the components from scratch (mathematics,
not intuition). In our specific numerical example, we recalculate all the values if we change fLC
(fz1 and fz2) from 3.918 kHz to 3.918/2 = 1.959 kHz. The before and after RC values are:
Before:
C1 = 11.16 nF, C2 = 12.4 nF, C3 = 88.11 pF, R1 = 2 kΩ, R2 = 3.641 kΩ, R3 = 1.278 kΩ.
After:
C1 = 11.16 nF, C2 = 32.7 nF, C3 = 44 pF, R1 = 2 kΩ, R2 = 7.28 kΩ, R3 = 484.24 Ω.
Our conclusion is, just to shift the two zeros without changing the crossover frequency and the
other poles and zeros of a Type 3 compensator, we need four component values to be changed
every time, and it isn’t straightforward either. We certainly can’t do it “on the fly”. But we can
do that with digital techniques, as we will learn in the next part of this series.
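Recomputing from scratch, as the text prescribes, indeed reproduces the “After” values. A short sketch (with fp0, fp1 and fp2 kept at their original targets):

```python
import math

# Design targets: LC pole (and both zeros) halved; everything else unchanged
R1, fp0, fp1, fp2 = 2e3, 7.133e3, 10.05e3, 500e3
fz1 = fz2 = 3.918e3 / 2                               # 1.959 kHz

C1 = 1 / (2 * math.pi * R1 * fp0)                     # unchanged, ~11.16 nF
C2 = (1 / (2 * math.pi * R1)) * (1 / fz1 - 1 / fp1)   # ~32.7 nF
R2 = R1 * fp0 / fz2                                   # ~7.28 kohm
C3 = 1 / (2 * math.pi * R2 * fp2)                     # ~44 pF
R3 = R1 * fz1 / (fp1 - fz1)                           # ~484 ohm
```

Four of the six values change simultaneously (C2, R2, C3, R3), which is exactly why this cannot be done by tweaking one part on the bench.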
[Figure 4.39 content: compensator gain Hdb(f) versus frequency (10 Hz to 1 MHz), starting from
the baseline C1 = 11.16 nF, C2 = 12.4 nF, C3 = 88.11 pF, R1 = 2 kΩ, R2 = 3.641 kΩ, R3 = 1.278 kΩ.
To decrease the DC gain without shifting the other zeros and poles: step #1: C1 → 2×C1; step
#2: R2 → ½×R2; step #3: C3 → 2×C3. To increase it: step #A: C1 → ½×C1; step #B: R2 → 2×R2;
step #C: C3 → ½×C3. Refer to the worked example on page 481 of Switching Power Supplies
A-Z, second edition (the starting point for this exercise).]

[Figure 4.40 content: the Type 3 compensator schematic (op-amp with VREF, VCONTROL at the
output, R1 from VOUT, and R2, R3, C1, C2, C3) annotated with steps #1: C1 → 2×C1, #2: R2 →
½×R2, #3: C3 → 2×C3. Changing only the DC gain (i.e. crossover frequency) requires three steps
every time (tricky).]
[Figure 4.41 content: halving fz2 is easy (1: R2 → 2×R2, then 2: C3 → C3/2), but trying to halve
fz1 is complicated (1: C2 → 2×C2, then 2: R3 → R3/2, and the circle of interactions never quite
closes). Use these design equations instead:

fp0 = fCROSS × (VRAMP/VIN)
C1 = VIN/(2π × VRAMP × fCROSS × R1)
C2 = [1/(2πR1)] × (1/fz1 − 1/fp1)
R2 = R1 × (fp0/fz2)
C3 = 1/(2π × R2 × fp2)
R3 = R1 × fz1/(fp1 − fz1)]
Unfortunately, the trouble doesn't stop with our inability to tweak a Type 3 compensator easily
or intuitively. For if we look at the capacitor values we have calculated for our application, they
are not even close to standard values. Capacitors still come mainly in the E12 series: 10, 12, 15,
18, 22, 27, 33, 39, 47, 56, 68, 82, with a tolerance of ±10%. On top of that, unless we are using
C0G capacitors, we have to include the effects of temperature, voltage, aging and so on. Not to
forget that our initial equations were based on an assumption: C1 >> C3.
So the final placement of the poles and zeros, as also the bandwidth (fCROSS), may be quite
different from what we intended.
However, just in case we want to derive more exact equations, without any approximations, here
they are, with and without the C1 >> C3 approximation. We now realize that, for example, “p0”,
the pole-at-origin, is actually affected by three components: R1, C1 and C3.

fp0 = 1/(2πR1(C1 + C3)) ≈ 1/(2πR1C1)

fp1 = 1/(2πR3C2)

fp2 = 1/(2πR2[C1C3/(C1 + C3)]) ≈ 1/(2πR2C3)

fz1 = 1/(2π(R1 + R3)C2)

fz2 = 1/(2πR2C1)
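Plugging the worked-example values into both forms shows how good the C1 >> C3 assumption is here (a quick check of our own):

```python
import math

# Worked-example component values
R1, R2, R3 = 2e3, 3.641e3, 1.278e3
C1, C2, C3 = 11.16e-9, 12.4e-9, 88.11e-12

fp0_exact  = 1 / (2 * math.pi * R1 * (C1 + C3))       # three components involved
fp0_approx = 1 / (2 * math.pi * R1 * C1)
fp2_exact  = 1 / (2 * math.pi * R2 * (C1 * C3 / (C1 + C3)))  # series combination
fp2_approx = 1 / (2 * math.pi * R2 * C3)

# With C1/C3 ~ 127, exact and approximate positions differ by under 1%.
```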
Matters get even more complicated and nothing is intuitive anymore. With that we head to
digital control loops next.
{Routine}
Rlower_:=Vref*Rupper_/(Vout-Vref)
G:= 10^(-Gfc/20)
G=[562.3413m]
a:=Fc^4+Fc^2*Fz1^2+Fc^2*Fz2^2+Fz1^2*Fz2^2
c:=Fc^4+Fc^2*Fp1^2+Fc^2*Fp2^2+Fp1^2*Fp2^2
C3_:= 1/(2*pi*Fz1*Rupper_)
R3_:= 1/(2*pi*Fp2*C3_)
R2_:= sqrt(c/a)*G*Fc*R3_/Fp1
C1_:= 1/(2*pi*Fz2*R2_)
C2_:= 1/(2*pi*Fp1*R2_)
{Output}
Rlower_=[10k]
C3_=[15.9155n]
R3_=[500]
R2_=[583.1803]
C1_=[272.9086n]
C2_=[13.6454n]
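The analytical routine above transcribes directly into Python for checking. Its input list is truncated in the listing; the frequency targets below (Fc = 15 kHz, fz1 = fz2 = 1 kHz, fp1 = fp2 = 20 kHz, plant gain Gfc = +5 dB at crossover) are inferred from the printed outputs and should be treated as assumptions:

```python
import math

# Inputs (partly inferred -- see note above)
Vout, Vref, Rupper = 1.0, 0.5, 10e3
Fc, Fz1, Fz2, Fp1, Fp2, Gfc = 15e3, 1e3, 1e3, 20e3, 20e3, 5.0

Rlower = Vref * Rupper / (Vout - Vref)            # 10 k
G = 10 ** (-Gfc / 20)                             # ~562.34m
a = Fc**4 + Fc**2 * Fz1**2 + Fc**2 * Fz2**2 + Fz1**2 * Fz2**2
c = Fc**4 + Fc**2 * Fp1**2 + Fc**2 * Fp2**2 + Fp1**2 * Fp2**2
C3 = 1 / (2 * math.pi * Fz1 * Rupper)             # ~15.9155 nF
R3 = 1 / (2 * math.pi * Fp2 * C3)                 # ~500 ohm
R2 = math.sqrt(c / a) * G * Fc * R3 / Fp1         # ~583.18 ohm
C1 = 1 / (2 * math.pi * Fz2 * R2)                 # ~272.91 nF
C2 = 1 / (2 * math.pi * Fp1 * R2)                 # ~13.645 nF
```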
[Figure content: the Type 3 analytical routine entered in the simulator alongside its schematic
(V1 = 15 V, Rupper_ = 10 kΩ, C1_ = 272.90864 nF, R2_ = 583.180301 Ω, C2_ = 13.645432 nF)
and the resulting AC transfer characteristic: gain (dB) and phase versus frequency, 100 Hz to
1 MHz.]
Frequency (Hz)
{Routine}
Rlower_:=Vref*Rupper_/(Vout-Vref)
G:= 10^(-Gfc/20)
G=[562.3413m]
a:=sqrt((Fc^2+Fp^2)*(Fc^2+Fz^2))
c:=(Fc^2+Fz^2)
R2_:= (a/c*Rupper_*Fc*G/Fp)
C2_:= 1/(2*pi*Fp*R2_)
C1_:= 1/(2*pi*Fz*R2_)
{Output}
Rlower_=[10k]
R2_=[7.0137k]
C2_=[1.1346n]
C1_=[22.692n]
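A Python transcription of this Type 2 analytical routine. As with the Type 3 listing, not all inputs are shown; Fc = 15 kHz, Fz = 1 kHz, Fp = 20 kHz and Gfc = +5 dB are inferred from the printed outputs (assumptions, not from the book):

```python
import math

# Inputs (partly inferred -- see note above)
Vout, Vref, Rupper = 1.0, 0.5, 10e3
Fc, Fz, Fp, Gfc = 15e3, 1e3, 20e3, 5.0

Rlower = Vref * Rupper / (Vout - Vref)            # 10 k
G = 10 ** (-Gfc / 20)                             # ~562.34m
a = math.sqrt((Fc**2 + Fp**2) * (Fc**2 + Fz**2))
c = Fc**2 + Fz**2
R2 = (a / c) * Rupper * Fc * G / Fp               # ~7.0137 kohm
C2 = 1 / (2 * math.pi * Fp * R2)                  # ~1.1346 nF
C1 = 1 / (2 * math.pi * Fz * R2)                  # ~22.692 nF
```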
[Figure content: the Type 2 analytical routine entered in the simulator alongside its schematic
(V1 = 15 V, Rupper_ = 10 kΩ, Vref = 500 mV, C1_ = 22.692016 nF, R2_ = 7.013698 kΩ,
C2_ = 1.134601 nF) and the resulting AC transfer characteristic: gain (dB) and phase (deg)
versus frequency, 100 Hz to 1 MHz.]
{Nico - k factor Type 3}
{Inputs}
Vout:=1 {Voltage Main}
Vref:=0.5 {OPAMP Vref}
Rupper_:= 10k {Rup feedback partition}
Fc:= 15k {beyond the plant resonance}
pm:=70 {Phase Margin }
Gfc:= -20 {Plant gain read at Fcrossover (pos. or neg. in dBs)}
pfc:= -120 {Phase at Fcrossover (negative degrees)}
{Routine}
Rlower_:=Vref*Rupper_/(Vout-Vref)
G:= 10^(-Gfc/20)
G=[10]
boost:= pm-pfc-90
boost=[100]
K:= (tan((boost/4+45)*pi/180))^2
K=[7.5486]
C2_:= 1/(2*pi*Fc*G*Rupper_)
C1_:= C2_*(K-1)
R2_:= sqrt(K)/(2*pi*Fc*C1_)
R3_:= Rupper_/(K-1)
C3_:= 1/(2*pi*Fc*(sqrt(K))*R3_)
{Outputs}
Rlower_=[10k]
C2_=[106.1033p]
C1_=[694.8315p]
R2_=[41.955k]
R3_=[1.527k]
C3_=[2.529n]
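Since this routine's inputs are fully listed, it is easy to verify; a direct Python transcription:

```python
import math

# Inputs exactly as listed in the K-factor Type 3 routine above
Vout, Vref, Rupper = 1.0, 0.5, 10e3
Fc, pm, Gfc, pfc = 15e3, 70.0, -20.0, -120.0

Rlower = Vref * Rupper / (Vout - Vref)            # 10 k
G = 10 ** (-Gfc / 20)                             # 10
boost = pm - pfc - 90                             # 100 deg
K = math.tan(math.radians(boost / 4 + 45)) ** 2   # ~7.5486
C2 = 1 / (2 * math.pi * Fc * G * Rupper)          # ~106.10 pF
C1 = C2 * (K - 1)                                 # ~694.83 pF
R2 = math.sqrt(K) / (2 * math.pi * Fc * C1)       # ~41.955 kohm
R3 = Rupper / (K - 1)                             # ~1.527 kohm
C3 = 1 / (2 * math.pi * Fc * math.sqrt(K) * R3)   # ~2.529 nF
```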
[Figure content: the K-factor Type 3 routine entered in the simulator alongside its schematic
(V1 = 15 V, Rupper_ = 10 kΩ, Rlower_ = 10 kΩ, C1_ = 694.831454 pF, R2_ = 41.954982 kΩ)
and the resulting AC transfer characteristic: gain (dB) and phase (deg) versus frequency,
100 Hz to 1 MHz.]
{Routine}
Rlower_:=Vref*Rupper_/(Vout-Vref)
G:= 10^(-Gfc/20)
G=[10]
boost:= pm-pfc-90
boost=[50]
K:= tan((boost/2 + 45)*pi/180)
K=[2.7475]
C2_:= 1/(2*pi*Fc*G*K*Rupper_)
C1_:= C2_*(K^2-1)
R2_:= K/(2*pi*Fc*C1_)
{Output}
Rlower_=[10k]
C2_=[38.6184p]
C1_=[252.898p]
R2_=[115.2704k]
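A Python transcription of the Type 2 K-factor routine above. Gfc = −20 dB is listed; Fc = 15 kHz and pfc = −70° (implied by boost = 50°) are assumptions inferred from the printed outputs:

```python
import math

# Inputs (Fc and pfc inferred -- see note above)
Vout, Vref, Rupper = 1.0, 0.5, 10e3
Fc, pm, Gfc, pfc = 15e3, 70.0, -20.0, -70.0

Rlower = Vref * Rupper / (Vout - Vref)            # 10 k
G = 10 ** (-Gfc / 20)                             # 10
boost = pm - pfc - 90                             # 50 deg
K = math.tan(math.radians(boost / 2 + 45))        # ~2.7475
C2 = 1 / (2 * math.pi * Fc * G * K * Rupper)      # ~38.62 pF
C1 = C2 * (K**2 - 1)                              # ~252.90 pF
R2 = K / (2 * math.pi * Fc * C1)                  # ~115.27 kohm
```

Note that for a Type 2 compensator the single zero/pole pair gives boost = 2·arctan(K) − 90°, so K itself (not K²) appears in the tangent relation.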
[Figure content: the K-factor Type 2 routine entered in the simulator alongside its schematic
(V1 = 15 V, Rupper_ = 10 kΩ, Vref = 500 mV, C1_ = 252.897967 pF, R2_ = 115.270364 kΩ,
C2_ = 38.618441 pF) and the resulting AC transfer characteristic: gain (dB) and phase (deg)
versus frequency, 100 Hz to 1 MHz.]
5.1 Introduction
Here it certainly gets a bit mathematical, and I (Sanjaya) apologize for that beforehand. But
really, we have no natural feel for phase angles in particular. That's why! Can any of us claim to
feel comfortable with something like e^jωt? Yet, in some strange way, e^jωt expresses a phase
relationship, which makes it not just necessary, but easier than its grislier alternatives, such as
solving differential equations. In other words, math is inevitable in feedback systems, and in
fact in anything in which phase is key.
However, the good news is, after gaining some mastery, by manipulating some common
functions in the s-plane, a certain confidence starts to emerge. That has been my approach
here— to drive the fear out. I have tried to have fun with math a bit, to get you to the level where
you won’t get shivers at the very sight of the Laplace transform. Because once you are past that,
it becomes remarkably smooth and painless as you’ll see.
Towards the end of this chapter, I have gone on to show that a historical “artifact” called
conditional stability, is not as harmless as it looks. In fact, it may in some cases be the prime
cause for the severe ringing on the output under large-signal events. I have thus introduced a
way to lower that ringing by achieving much better matching of the power stage and the
feedback section. The Q-matching technique published here is perhaps a stunning analogy…
hitherto relatively unnoticed, but at par with my concept of current ripple ratio and also the
now suddenly popular scaling lows, presented at an IEEE seminar by Sanjaya, and then a
webinar by Nicola.
In the previous chapter, we learned that the basic intent of creating a closed loop (i.e.
corrective) system is that by introducing negative feedback we can reduce the effect of any
disturbance on the output, be it a line variation, a load step, or a wiggle in the reference (however
relevant)—by the factor 1/(1+T) for all of them, as compared to their effect on the output
without a closed loop corrective system in place. Here “T” is in effect the cumulative gain of all
the cascaded stages of the loop, consisting of the plant (gain = “G”) and feedback (gain = “H”).
This term “T”, may be referred to in literature as the “open-loop gain”, or “loop gain”, or “round
gain” and so on. But whatever it is called, we need to keep in mind that T = G×H. So, the
constituent gains are simply multiplied together—or added up on a log scale, since log T = log
G + log H. In terms of our shorthand: TdB = GdB+HdB. Note that this presupposes cascaded gain
stages. We also showed that the voltage divider cannot be extricated as a separate gain stage if
we are using conventional error amplifiers in the compensator. Nor can the LC post-filter stage
be separated out, except for a buck. But by introducing something called the “equivalent
inductance”, we do manage to do that, even for a boost and a buck-boost.
In principle, we always try to set DC gain as high as possible. Because at least for DC, or very
low frequencies, there is no associated phase shift to consider, and for that reason the “loop
gain” (overall transfer function, T) has a magnitude, with no imaginary component (i.e. it is a
real number). In that simple case the following approximation makes intuitive sense: 1/(1+T)
≈ 1/T. In other words, a high DC gain is very helpful in rejecting DC or low-frequency (e.g. line
AC frequency of 50/60 Hz) disturbances in particular, by the factor 1/T.
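The 1/(1+T) ≈ 1/T approximation is easy to make concrete. A minimal Python sketch (the 60 dB figure below is an assumed DC loop gain for illustration, not a value from the text):

```python
import math

def closed_loop_attenuation(T_mag):
    # Factor by which a disturbance appears at the output once the loop is closed.
    return 1.0 / (1.0 + T_mag)

# Assume a DC loop gain of 60 dB, i.e. T = 1000 (a hypothetical value).
T = 10 ** (60 / 20)
exact = closed_loop_attenuation(T)
approx = 1.0 / T
print(exact, approx)   # both ~0.001: 60 dB of loop gain rejects a DC disturbance ~1000x
```

The exact and approximate factors differ by only about 0.1% here, which is why the 1/T shorthand is so commonly used at low frequencies.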
But we soon learn we can’t afford to set a high gain for all the frequency components, because
higher frequencies are almost inevitably accompanied by frequency-dependent phase shifts.
We always need to ask: what is the cumulative impact of all these phase shifts? Does it cause
instability? That condition is defined by T = -1, i.e. a magnitude of unity and a phase of -180°.
We thus defined the phase margin as the difference between the actual phase lag and the
ominous threshold of -180°.
Note that phase angles of cascaded gain stages always add up arithmetically, as do gains,
provided the latter are expressed in logs (dB). In other words, φT = φG + φH. In particular, we
need to prevent any frequency component of any disturbance from ever reaching 180° of phase
lag by the time it propagates full circle through the plant and the compensator stages. Because
if the net phase lag ever reaches that value, then combined with the 180° baseline offset always
present on account of negative feedback, we will end up with 180+180 = 360 ≡ 0° phase lag.
Intuitively this means that after going around the loop, the disturbance has returned “in-phase”
for the frequency component under discussion. It can wreak havoc, if the corresponding
magnitude is also the same as the starting value (gain =1 or 0 dB). Note that this gain equality
condition is hard to visualize intuitively, but that is what is implied by T = -1. It is not the same
as acoustic feedback for example, where the signal can come around full circle in phase, and
with increased magnitude, and cause a huge squeal.
In other words, just a phase lag of 180° does not guarantee instability. It might cause severe
ringing on the output, but it cannot sustain itself, and will eventually decay. So at best, phase
angle reinforcement is only bad news. One more condition needs to be met to cause full blown
instability: the magnitude of the frequency component on returning full circle needs to also be
equal to its starting value (“0dB”). If these two conditions for phase and gain, are met
simultaneously, only then the disturbance will become self-sustaining.
To avoid this “doomsday scenario”, we need to incorporate a robust safety or stability margin
(figuratively, the “distance from disaster”). We can talk about this margin either in
terms of “phase margin”, i.e. the phase angle short of the 180° phase lag level when the gain falls
just below unity (at crossover frequency), or in terms of “gain margin”, i.e. the gain below the
unity (0dB) level at the frequency where the phase reaches 180° phase lag (if at all, of course).
However, not forgetting that we are dealing with switchers, not continuous control as in older
analog systems such as room air-conditioners, we also need to keep in mind that we have to
stay well clear of Nyquist’s sampling limit of fSW/2, where fSW is the switching frequency. This
causes an additional frequency-dependent phase lag which can also contribute to system
instability. We also need to recognize that the effect of switching can be felt well below fSW/2.
So we need to stay clear of that by quite a margin, typically crossing over (0 dB) at less than
fSW/5.
Summarizing: In a practical case, we usually attempt to set a very high DC gain to reduce the
effect of disturbances in general. But realizing that frequency-dependent phase shifts will
always occur, we strive to ensure that T “crosses over” (i.e. falls below unity, or 0dB axis on a
log scale) typically between fSW/10 to fSW/5, and the phase lag at that point must be well short
of -180°.
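As a numerical illustration of crossover and phase margin, here is a Python sketch. The loop gain below is a hypothetical integrator-plus-pole T(s); none of its numbers (f0, fp) come from the text:

```python
import cmath, math

# Hypothetical loop gain: an integrator crossing near fsw/10, plus one extra pole.
fsw = 300e3
f0 = 30e3        # integrator unity-gain frequency (assumed)
fp = 50e3        # extra pole (assumed)

def T(f):
    s = 1j * 2 * math.pi * f
    return (2 * math.pi * f0 / s) / (1 + s / (2 * math.pi * fp))

# Geometric bisection for the crossover frequency, where |T| falls to 1 (0 dB).
lo, hi = 1e3, fsw / 2
for _ in range(100):
    mid = math.sqrt(lo * hi)
    if abs(T(mid)) > 1:
        lo = mid
    else:
        hi = mid
fc = math.sqrt(lo * hi)
pm = 180 + math.degrees(cmath.phase(T(fc)))   # margin above the -180 deg threshold

print(fc, pm)   # ~26.5 kHz crossover, ~62 degrees of phase margin
```

Note how the extra pole at fp both pulls the crossover below f0 and eats into the 90° margin the bare integrator would have had.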
Occasionally, engineers still ask: don’t we try to set current-mode controlled (CMC) systems to
achieve even higher bandwidths? Say closer to fSW/3? Yes indeed. We may try. But what can
happen is this: there is a subharmonic peaking in the gain plot exactly at fSW/2, and it can have
severe consequences. Let us briefly revisit CMC, to get that over with.
5.2 Problems with CMC
Sometimes, in current mode control (CMC), we try to go as high as fSW/3 for the crossover
frequency. But keep in mind that a sharp peaking in the gain curve appears at exactly fSW/2, on
account of subharmonic instability if D > 0.5. We can see this peak on a Bode plot if we look
closely and have high-enough resolution. See Figure 5.1.
We discover that if this “parasitic peak” rises upwards and ever intersects the 0dB line, it will
cause the system to go into unrecoverable instability, with no understandable “Bode plot”
thereafter. We will see “alternate pulsing” on the switch node (one wide pulse, followed by a
narrow pulse, repeating indefinitely). The transient response will be as bad as it gets, even
though under steady-state conditions, we may not notice any difference.
Solutions to that are lowering the “Q” of this subharmonic peak—by steps such as adding more
slope compensation, increasing the inductance, or simply lowering the crossover frequency to
even lower values than the supposed max fSW/5 limit of voltage-mode control (VMC)! This will
then increase the “safety margin” requirement of CMC as shown in Figure 5.1.
To lower the Q to a reasonable value of less than 2 (maybe even to 1 or 0.5), we need to ensure a
certain minimum inductance as shown in Figure 5.2. But increasing the inductance causes a
new problem, as shown in Figure 5.3. This is the issue related to the leading edge spike. It can
cause jitter and in severe cases, an inability to deliver full power.
Suppose we increase the inductance to evade subharmonic instability; we could then cause
premature termination of the switching pulses, because in doing so we are inadvertently
raising the pedestal on which the leading edge spike rides. And since less than the required
energy will be delivered for that
(prematurely terminated) cycle, in the next cycle the converter will try to compensate by a
larger duty cycle. In this process it gets some unexpected help because after the early
termination of the previous pulse, the inductor current had a longer time to slew down, and
thus the pedestal on which the leading edge spike is riding comes down, usually enough to help
it evade early pulse-limiting in the next cycle.
What we see on the oscilloscope are alternate wide and narrow pulses, which mimic what we
get under subharmonic instability, as mentioned above.
We are surprised, because we had thought that a high inductance should be helping us avoid
subharmonic instability, but here it seems to be aggravating it!
We could of course set a large ‘blanking time’ for current mode control, and/or we can add some
delay to the current limit detect circuit. But we also then run the danger of not being able to
react fast enough to an actual abnormal load condition, especially if the inductor starts to
saturate. The transient response will also worsen, since as mentioned, any delay in sampling is
equivalent to a frequency-dependent phase lag. If we add slope compensation indiscriminately,
we are in effect converting CMC to VMC, and the LC peaking will once again start showing up in
the Bode plot!
And so on. We go around in circles. No wonder, VMC with input feedforward is becoming the
preferred choice nowadays. We therefore ignore CMC from this point on.
Figure 5.1: Gain plot (CMC) for D > 0.5. The magnitude of the loop gain (T), in dB, versus log frequency shows a subharmonic peak at fSW/2; the gap between the tip of that peak and the 0 dB line is the margin of safety if the “Q” of this peak is not changed, and lowering the crossover frequency increases this safety margin.
(The figure shows how slope compensation (Se) causes any applied disturbance in the inductor current waveform to converge, Δ2 < Δ1. The modified Q of the subharmonic peak is:

Q = 1/[π(mc·D′ − 0.5)], where mc = 1 + Se/Sn and D′ = 1 − D

Rearranging this for each topology gives the minimum inductance for a given slope compensation, to get Q below 2:

L(μH) ≥ VIN × (D − 0.34)/SlopeComp(A/μs) (Buck)

L(μH) ≥ VO × (D − 0.34)/SlopeComp(A/μs) (Boost)

L(μH) ≥ (VIN + VO) × (D − 0.34)/SlopeComp(A/μs) (Buck-Boost)

Equivalently, the slope compensation required for a general target Q is S = (VIN/L) × (D − 0.5 + 1/(πQ)) for the buck, with VIN replaced by VO for the boost, and by (VIN + VO) for the buck-boost.)
Figure 5.2: Minimum inductance related to a given slope compensation, to achieve Q < 2
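The Q expression in Figure 5.2 is easy to explore numerically. In the sketch below, all component values (a 5 V to 3.3 V buck, the slope compensation Se, and the candidate inductances) are illustrative assumptions, not values from the text:

```python
import math

# Q of the CMC subharmonic peak: Q = 1/(pi*(mc*D' - 0.5)), with mc = 1 + Se/Sn.
VIN, VO = 5.0, 3.3
D = VO / VIN               # duty cycle 0.66 > 0.5, so slope compensation is mandatory
Dp = 1 - D
Se = 1e6                   # applied slope compensation, in A/s (i.e. 1 A/us), assumed

def subharmonic_Q(L):
    Sn = (VIN - VO) / L    # inductor current up-slope for a buck, in A/s
    mc = 1 + Se / Sn
    return 1 / (math.pi * (mc * Dp - 0.5))

for L in (1e-6, 2.2e-6, 4.7e-6):
    print(L, subharmonic_Q(L))   # raising L lowers Sn, raises mc, and lowers Q
```

For this operating point, 1 μH leaves Q well above 2 (a dangerous peak), while 2.2 μH and 4.7 μH bring it comfortably below, exactly the trade-off the figure describes.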
Figure 5.3: How increasing the inductance can cause alternate pulsing too
As we saw in Chapter 4, eventually in a closed loop of cascaded blocks, it really does not matter
whether a block is considered within the plant or within the feedback stage. And that is
essentially what is also implied by writing T = G×H, which could instead be written as T = H ×
G, or in general as T = 𝚷 Gi. Any disturbance is attenuated by the correction factor 1/(1+T),
where T = 𝚷 Gi compared to its effect on the output if closed-loop feedback were not present!
The only critical element of any practical control loop is negative feedback—for creating
corrective action. The other stages can be viewed as “democratic” constituents of the closed
loop system from the viewpoint of the disturbance. It doesn’t matter where the gain or phase
contribution is coming from. We are only concerned with the net loop gain, i.e. “T” (with the
dBs of all cascaded stages, and their phases, added arithmetically). That is why in the
previous chapter we had suggested not even bothering to allocate different symbols for the
plant and feedback, such as G and H, or H and G.
However, in this part we are still retaining the distinction between G and H, just for the purpose
of descriptive clarity. So, here G is still the plant and H the feedback block. As mentioned, in
literature, some reverse even that. Beware!
Note that phase shifts occur in both the blocks, G and H. Unfortunately, the plant gain/phase
profile, (G), is largely out of our control. On the other hand, the compensator (feedback block,
H) is almost completely in our hands. That is the only major difference between the two from
our viewpoint, in a closed loop system. We can adjust H to literally “compensate” for, or “tune
out”, any undesirable aspects of the plant gain/phase profile G. Hence this topic of study is often
called “loop compensation”.
The first “weakness” of the plant profile G, in terms of creating a desirable T profile, is that it
has a low DC gain—almost flat initially till a certain cutoff frequency determined by the
resonance of the inductance and output capacitor (LC). We can ask: what exactly is the low-
frequency/DC (flat) value of G? There are three simple equations to answer that, for each of the
three basic topologies, but we will focus only on the buck regulator here. We can approximate
that equation further if the DC resistance of the inductor (DCR) and the equivalent series
resistance (ESR) of the output capacitor are both almost zero, as follows:
G(s) = (VIN/VRAMP) × 1/[(s/ω0)² + (1/Q)(s/ω0) + 1]
The gain contribution from the PWM comparator stage (part of G) is 1/VRAMP. Similarly, the
power (switching buck) stage (also part of G) provides the V IN term. The LC post filter (also part
of G) provides the rest of the plant gain above, i.e. the frequency-dependent part (involving s).
There are many ways the above equation is written out in literature actually.
GLC(s) = (1/LC) / [s² + s/(RC) + 1/(LC)]

GLC(s) = K / [s² + (ω0/Q)s + ω0²]

GLC(s) = 1 / [(s/ω0)² + (1/Q)(s/ω0) + 1]

GLC(s) = 1 / [(s/ω0)² + 2ζ(s/ω0) + 1]

where ω0 = 1/(LC)^(1/2). Note that above, K = ω0². Q is the quality factor. We have several forms: Q = R/(ω0L), or Q = R × (C/L)^(1/2), where R is the load resistor in the case of a switcher. Alternatively, we can express this transfer function in terms of ζ, the damping factor, where ζ = 1/(2Q).
Note: The LC post-filter has no DC gain (0 dB). Any DC gain in the plant comes from the VIN/VRAMP
term, i.e. from the two other constituent stages. So to avoid confusion, it is best to use only the
last two forms above, which express the gain in terms of (s/ω0). It then becomes immediately
clear that the (s/ω0) terms contribute no DC gain.
In other words, stick to this form:
GLC(s) = 1 / [(s/ω0)² + (1/Q)(s/ω0) + 1]
Or for the plant (combining the post filter with the comparator and switch), this is the best way
to write it out:
G(s) = (VIN/VRAMP) × 1/[(s/ω0)² + (1/Q)(s/ω0) + 1]
So for example, in our case, we can now clearly see that the only contribution to the DC gain
value of G above comes from the term V IN/VRAMP. If the PWM ramp is 1V, and the input voltage
is 12V, then the DC gain contribution from the plant, irrespective of the output voltage is 12/1 =
12. In terms of decibels, this is 21.6 dB. This amount of DC gain is usually insufficient to reject
low-frequency ripple or transients, as described in Part 1, and we therefore have to use the
feedback network to somehow increase the product G×H = T fairly dramatically. The circuit to
do that is the integrator, as also discussed in Part 1.
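The decibel arithmetic above is trivial to check (a one-liner sketch):

```python
import math

VIN, VRAMP = 12.0, 1.0            # values from the text
G_dc = VIN / VRAMP                # plant DC gain, independent of the output voltage
G_dc_dB = 20 * math.log10(G_dc)
print(G_dc, G_dc_dB)              # 12.0 and ~21.6 dB
```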
At this point, we describe the behavior of the LC post filter, and provide more detailed forms of
the LC filter’s transfer function, now involving parasitics.
We are again going to look at this through our numerical example in Chapter 4 (also in A to Z,
Second Edition).
Example: Using a 300 kHz synchronous buck controller we wish to step down 15 V to 1 V. The load
resistor is 0.2 Ω (5 A). The PWM ramp is 2.14 V as per the datasheet of the part. The selected
inductor is 5 μH, and the output capacitor is 330 μF, with an ESR of 48 mΩ.
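Before looking at the plots, we can pre-compute the key numbers of this example with a short Python sketch of the formulas used in this section:

```python
import math

# Numbers from the worked example: 300 kHz buck, 15 V -> 1 V, 5 A.
VIN, VRAMP = 15.0, 2.14
L, C = 5e-6, 330e-6
R, ESR = 0.2, 0.048

f_LC = 1 / (2 * math.pi * math.sqrt(L * C))   # LC double-pole frequency
Q = R * math.sqrt(C / L)                      # quality factor (the load is the damping)
f_ESR = 1 / (2 * math.pi * ESR * C)           # ESR zero
G_dc_dB = 20 * math.log10(VIN / VRAMP)        # plant DC gain

print(f_LC, Q, f_ESR, G_dc_dB)   # ~3918 Hz, ~1.625, ~10.0 kHz, ~16.9 dB
```

These are the values the figures below annotate: a double pole near 3.9 kHz with mild peaking (Q ≈ 1.6), and an ESR zero about a half decade higher.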
(The figure plots the gain and phase of the LC post-filter for Q = 10, 1, 0.707 and 0.5. The gain peaking at resonance is approximately 20 log Q, e.g. 20 log 0.5 = −6 dB, 20 log 0.707 = −3 dB, 20 log 1 = 0 dB; the phase swings from 0° towards −180° around the pole frequency, with fPOLE = 1/(2π√(LC)) and Q = RLOAD × √(C/L).)
Figure 5.4: LC post-filter response as load is varied, assuming zero DCR and zero ESR
To start with, we ignore the ESR too. As we vary the load we get the filter response shown in
Figure 5.4. Note the extremely abrupt phase shift of 180° at the break (resonant) frequency fLC,
and the peaking in gain, especially as we increase the load resistor (increase Q). The load resistor
is the only dissipative term so far which is damping out the LC resonance. So if it were not
present, we would tend towards the following approximation.
G(s) = (VIN/VRAMP) × 1/[(s/ω0)² + (1/Q)(s/ω0) + 1] ≈ (VIN/VRAMP) × 1/[(s/ω0)² + 1]
We reach a conclusion here: The resonant frequency is determined by the coefficient of the term
involving “s2”. The damping is determined by the coefficient of the term involving “s”.
In general, if we arrive at a second-order denominator of the type:

A·s² + B·s + 1
We can conclude that the resonant frequency and Q are as follows:

ω0 = 1/A^(1/2) ; Q = A^(1/2)/B
If A is unity, as it usually is, then remember this:
Q is the reciprocal of the coefficient of the s-term.
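Applying this rule to the ideal buck plant, whose denominator is A·s² + B·s + 1 with A = LC and B = L/R, is a quick check in Python (using the running example's values):

```python
import math

L, C, R = 5e-6, 330e-6, 0.2   # running example values
A, B = L * C, L / R           # denominator is LC*s^2 + (L/R)*s + 1

w0 = 1 / math.sqrt(A)         # resonant angular frequency
Q = math.sqrt(A) / B          # Q from the rule above

print(w0 / (2 * math.pi), Q)  # ~3918 Hz and ~1.625, matching Q = R*sqrt(C/L)
```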
Now let us add some ESR to this, but still at max load. The resultant gain function is shown in
Figure 5.5. It introduces an ESR-zero, and a good approximation of that is
G(s) = (VIN/VRAMP) × (1 + s/ωESR) / [(s/ω0)² + (1/Q)(s/ω0) + 1]
Now let us set the ESR back to zero, and instead add the DCR (of the inductor) to the transfer
function. This is plotted out in Figure 5.6 for max load.
The detailed transfer function with no approximations (ESR and DCR included) is:
GLC(s) = (1 + s/ωESR) / [ (s/ω0)²(1 + ESR/R) + s(DCR·C·(1 + ESR/R) + ESR·C + L/R) + (1 + DCR/R) ]
where O=1/(LC)1/2 as before, R is the load resistor and C is the output capacitor with some
ESR. L is the inductor with a certain DCR.
It is difficult to write a simple form for Q above.
We make the following conclusions:
a) There is an ESR-zero at the angular frequency ωESR = 1/(ESR × C). So fESR = 1/(2π × ESR × C).
b) The ESR starts dramatically changing the -2 slope of the LC double pole, closer to a -1
slope, as it approaches the LC break-point frequency from the right.
c) The ESR affects the break-point frequency, since it appears in the term involving s². The
DCR does so too, but only slightly, through the term (1 + DCR/R) at the end.
d) Both the ESR and the DCR affect the Q as expected.
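Rather than memorizing the full expression, we can always evaluate the same transfer function directly from the circuit impedances (the series L + DCR branch feeding C + ESR in parallel with the load R). A sketch, using the example's values:

```python
import math, cmath

L, C, R = 5e-6, 330e-6, 0.2
DCR, ESR = 0.048, 0.048        # both parasitics included this time

def G_LC(f):
    s = 1j * 2 * math.pi * f
    Zc = ESR + 1 / (s * C)                 # output capacitor with its ESR
    Zout = (R * Zc) / (R + Zc)             # in parallel with the load
    return Zout / (Zout + s * L + DCR)     # divider against the inductor branch

for f in (100.0, 3918.0, 20e3, 100e3):
    print(f, 20 * math.log10(abs(G_LC(f))))
```

Near DC the gain is not quite 0 dB but 20·log10(R/(R + DCR)) ≈ −1.9 dB, which is exactly the (1 + DCR/R) term of conclusion (c) at work; well above the ESR zero, the roll-off flattens from −2 towards −1, per conclusion (b).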
(The figure plots the plant gain in dB versus frequency on a log scale, comparing the simple ideal equation, G(s) = (VIN/VRAMP) × 1/[(s/ω0)² + (1/Q)(s/ω0) + 1] with fLC = 1/(2π√(LC)) and Q = RLOAD × √(COUT/L), against the accurate gain for ESR = 0 Ω, 0.048 Ω and 0.096 Ω, at RLOAD = 0.2 Ω and DCR = 0.)
Figure 5.5: LC post-filter response as ESR of cap is varied, assuming zero DCR and MAX LOAD
(The figure plots the plant gain for DCR = 0 Ω, 0.048 Ω and 0.096 Ω, at RLOAD = 0.2 Ω and ESR ≈ 0, against the ideal equation. For these values, fLC ≈ 3.918 kHz and Q ≈ 1.625.)
Figure 5.6: LC post-filter response as DCR is varied, assuming zero ESR and MAX LOAD
Before we go any further with our analysis, we pause. The reason for the pause is we still need
to develop a certain “intuition” to succeed. We need to understand how the frequency domain
(s-plane) came about in the first place, and how the Laplace transform can help us. And how it
sometimes can’t!
Intuition in control loop theory is like the “taste of coffee”. You need to acquire it, since it is not
necessarily natural. Feedback, in general, is definitely an area where prior intuition can actually
be misleading. For one, we are not intuitively cognizant of phase angles. That is perhaps
why, even the “basic” phenomenon of interference fringes from light beams, as demonstrated
by Thomas Young, came as late as 1801, and was still not accepted for years thereafter by his
peers. But it is that experiment which underlined the importance of phase angle, as in feedback
theory subsequently.
Our approach here is in a sense, to put the cart before the horse, and first understand the
behavior of some of the key underlying transfer (gain) functions in the “s-plane”. The s-plane
with all its real and imaginary, positive and negative frequencies is also, initially at least,
unintuitive. The good news is, after we feel comfortable navigating the s-plane, we will
hopefully develop a certain form of acquired intuition for control loop theory, which will see us
through the complexities, even on to digital loops!
5.6 Logarithms are Natural
Note that while plotting the gain-phase curves, we may have discovered it was completely
equivalent whether we took the variable being plotted (gain or frequency), and used a log scale,
or took the log of the variable beforehand, and then plotted it on a linear scale. We remind
ourselves here that a 10× variation is the same as 20 dB. Similarly, a (1/10)× variation is the
same as -20dB. Similarly, a 10 dB variation is almost the same as a 3× variation, since 20 × log(3)
≈ 10 dB. Note that no one really talks of frequency in terms of decibels, so we have also refrained
from that here. Instead, we are using a log scale for frequency in our figures. However keep in
mind that if we had expressed frequency in decibels in the same manner as for gain (i.e. as
20×log f), we would have realized that for both gain and for frequency, a “20 dB” shift is
equivalent to a 10× variation, which is more commonly referred to as a “decade” shift when we
talk specifically about frequency instead of gain. In other words, intuitively, “20 dB/decade” or
slope of “1” is actually the same as 20 dB/20dB. Which in terms of tan(θ) is literally a slope of
1 (i.e. tan 45°, if the x and y-axis are equally proportioned). That is why +20dB/decade is often
called a “+1” slope. Similarly, -20dB/decade is a -1 slope. And -40dB/decade is a “-2” slope.
In other words, a typical slope of “-1”, i.e. -20 dB/decade, simply means that if the gain changes
by a factor of 10, so does the frequency, by the very same factor!
But isn’t that inverse proportionality?
In general, a variation in gain by any arbitrary factor “Z”, corresponds to a variation in the
frequency by the same factor “Z”— for a line with slope “-1”. For a line with slope “-2”, as for
the plant gain above the LC double pole corner frequency fLC, a variation of -40dB/decade, or “-
2” slope, simply means that if the frequency increases by a factor “Z” (say 2×), the gain falls by
a factor Z2 (4×). This is ∝ 1/f2. And so on.
Another way of expressing slope is to talk in terms of octaves of shift in frequency, instead of
decades. An octave is simply a doubling of frequency. But since 20 × log (2) = 6 dB, some
engineers prefer to say that a slope of “-1” is the same as -6 dB/octave instead of -20 dB/dec.
Which is also just -6 dB/6dB, or -1. Same as -20 dB/20dB, and so on.
We realize that what all this implies is:
a) If the gain is inversely proportional to frequency, i.e. gain ∝ 1/f, we get a slope of -1 (-20
dB/dec) on a log vs log scale. This usually comes from one reactive element (such as an
RC combination).
b) If the gain is inversely proportional to frequency2, i.e. gain ∝ 1/f2, we get a slope of -2 (-
40 dB/dec) on a log vs log scale. This usually comes from two reactive elements (such as
the LC double pole in VMC).
c) If the gain is proportional to frequency, i.e. gain ∝ f, we get a slope of +1 (+20 dB/dec)
on a log vs log scale.
d) If the gain is proportional to frequency2, i.e. gain ∝ f2, we get a slope of +2 (+40 dB/dec)
on a log vs log scale.
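These proportionalities are easy to verify numerically (a trivial sketch):

```python
import math

def gain_dB(f, n):
    # Gain proportional to f**n, expressed in decibels.
    return 20 * math.log10(f ** n)

for n in (-1, -2, 1, 2):
    per_decade = gain_dB(1000, n) - gain_dB(100, n)
    print(n, per_decade)   # -20, -40, +20, +40 dB per decade of frequency
```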
Logarithms are simply the most natural form of progressions in our world. Because they reflect
constant ratios! They are essentially “geometrical progressions”, the same as exponential
functions.
Example: Consider 10000 power supplies in the field with a failure rate of 10% every year. That
means if in 2010 we had 10000 working units, in 2011 we would have 10000 × 0.9 = 9000 units. In
2012 we would have 9000 × 0.9 = 8100 units left. In 2013 we would have 7290 units left, in 2014,
6561 units, and so on. If we plot these points --- 10000, 9000, 8100, 7290, 6561 and so on, versus
time, we will get the well-known decaying exponential function. See Figure 5.7 on the left side. We
have plotted the same curve twice: the curve on the right has a log scale on the vertical axis. Note
how it now looks like a straight line. It can never, however, actually reach zero!
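The geometric progression and its log-scale straight line can be reproduced in a few lines (a sketch of the example's arithmetic):

```python
import math

units = [10000 * 0.9 ** t for t in range(5)]
print(units)   # 10000, 9000, 8100, 7290, 6561 (up to float rounding)

# On a log scale the curve is a straight line: every yearly step in log10(units)
# is the same constant, log10(0.9).
steps = [math.log10(units[t + 1] / units[t]) for t in range(4)]
print(steps)   # each ~ -0.0458
```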
The simplest and most obvious initial assumption of a constant failure rate has led to an
exponential curve. That is because the exponential curve is simply a succession of evenly spaced
data points (very close to each other), which are in geometric progression—i.e. the ratio of any
point to its preceding point is a constant number. So if x is the horizontal axis, we get the
function on the vertical axis as y(x) = a×a×a….(multiplied x times), i.e. y(x) = a x. Most natural
processes behave similarly, such as radioactive decay (half-life), population growth, etc. Note
that all curves based on geometrical progressions may seem “exponential” to us, but to be
accurate, the truly exponential curve is the one where the ratio of the geometric progression is
such that the slope of the function a^x at any point equals the function itself. In other words:
d(a^x)/dx = a^x, only if a ≈ 2.72 (i.e. “e”)
This property of “e” provides a huge simplification when trying to solve differential equations,
which is why “e” is so ubiquitous.
We also recall that a logarithmic scale is simply a different scaled version of the exponential
scale, so they possess the same qualities. That is why if the log of any number is multiplied by
2.303, we get its natural log (“ln”). Conversely, if we divide the natural log by 2.303 we get its
log. This follows from
ln(10) ≈ 2.303 and 1/log(e) ≈ 2.303
(The figure plots the working units left versus year, from 2010 to 2020: the left panel uses a linear vertical scale, the right panel a log vertical scale with decade ticks at 10000, 1000 and 100.)
Figure 5.7: How geometrical progressions appear as straight lines on logarithmic scales
Relating this to our relationship with nature, we perceive loudness and brightness close to
logarithmic. We tend to perceive decibels, not factors. That way our senses can handle a very
wide range of sight and sound amplitudes, by “squeezing them together”—almost
logarithmically.
See the conversion tables for logs/decibels and factors provided in Figure 5.8.
The other great thing about using logs as mentioned previously is that if T = GH, we get log |T|=
log |G| + log |H|. We have simply written this out in shorthand as TdB = GdB + HdB, implying that
we can arithmetically sum the decibels, since it is now all logarithmic.
Next, we show the development of the Laplace transform.
5.7 Fourier Series to Laplace Transform
Let us start with what we learned in high school. We broke up a repetitive waveform into
discrete harmonic frequency components, analyzed the effect of each harmonic separately, then
summed them all up to reconstruct the result. The decomposition was in terms of the
fundamental frequency ω0. In addition there was a DC offset expressed by a0/2 (sometimes
called just a0 in literature).
The tricky thing was to apply that to switchers. We have to go from angles expressed in
(dimensionless) radians to time and frequency. Because sine and cosines can’t be applied
directly to time. Time is not dimensionless. However the conversions to use are indicated in
Figure 5.9. The key is the following transformation:

θ/2π = t/T, so θ = 2πt/T

We can also write this as

θ = 2πf·t = ωt
With this, we get for a time-repetitive waveform, as used in switchers:
f(t) = a0/2 + Σ(n=1..∞) an·cos(nω0t) + Σ(n=1..∞) bn·sin(nω0t)

an = (ω0/π) ∫(0..T) f(t)·cos(nω0t) dt

bn = (ω0/π) ∫(0..T) f(t)·sin(nω0t) dt
0
In effect, we are going between time into a “frequency domain”, except that so far this frequency
domain is simply a discrete array of frequencies, separated by something we call the fundamental
frequency.
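We can exercise these formulas numerically on a switcher-like rectangular wave (amplitude 1, duty cycle D); the duty cycle and integration resolution below are arbitrary choices for illustration:

```python
import math

T, D = 1.0, 0.3
w0 = 2 * math.pi / T
f = lambda t: 1.0 if (t % T) < D * T else 0.0

# Midpoint-rule integration of the an/bn formulas; note (w0/pi) = (2/T).
N = 20000
dt = T / N
ts = [(k + 0.5) * dt for k in range(N)]
a0 = (2 / T) * sum(f(t) * dt for t in ts)
a1 = (2 / T) * sum(f(t) * math.cos(w0 * t) * dt for t in ts)
b1 = (2 / T) * sum(f(t) * math.sin(w0 * t) * dt for t in ts)

print(a0 / 2)   # DC term = duty cycle = 0.3
print(a1, b1)   # fundamental: matches sin(2*pi*D)/pi and (1 - cos(2*pi*D))/pi
```

The DC term coming out as exactly the duty cycle is the familiar switcher result (for a buck, VO = D·VIN is just the a0/2 of the switch-node waveform, before the LC filter removes the rest).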
Figure 5.9: Going between angles and time in Fourier series for example
This technique was further developed into the complex Fourier series, by invoking the
exponential function. But it was still just a mathematical construct to simplify computations.
And it was based on the following well-known relationships
e^(jθ) = cos θ + j·sin θ ; e^(−jθ) = cos θ − j·sin θ

sin θ = (e^(jθ) − e^(−jθ))/(2j) ; cos θ = (e^(jθ) + e^(−jθ))/2
Note that in standard electrical analysis, we set θ = ωt, as mentioned.
As an example, using the above equations, we can derive the magnitude and phase of the
exponential function f(θ) = e^(jθ) as follows:

|e^(jθ)| = (cos²θ + sin²θ)^(1/2) = 1

Argument(e^(jθ)) = tan⁻¹(sin θ / cos θ) = tan⁻¹(tan θ) = θ
So now we are operating with real and imaginary harmonic amplitudes. It is just a
mathematical construct though. This is called the "complex Fourier series". In general:

f(θ) = Σ(n=−∞..∞) cn·e^(jnθ)
All the frequencies are a discrete interval apart, and can be both negative and positive.
In our case, we replace θ with ω0t. We can then solve for cn as follows
cn = (ω0/2π) ∫(0..T) f(t)·e^(−jnω0t) dt, or equivalently, cn = (1/T) ∫(0..T) f(t)·e^(−j2πnt/T) dt
This is how we can go back and forth between the domain involving time (t) and frequency as
represented by the discrete array nω0.
There is no smooth one-dimensional, or two-dimensional spread of frequencies yet! That comes
next.
Historically, the next step was to try the same decomposition technique with non-repetitive
waveforms. This led to the Fourier transform method. Here the summation over discrete
frequencies changed to a smooth integration. The decomposition was as follows
f(t) = (1/2π) ∫(−∞..+∞) F(ω)·e^(jωt) dω, where F(ω) = ∫(−∞..+∞) f(t)·e^(−jωt) dt
In other words, the real part of the Fourier transform is found by multiplying the time domain
signal by the cosine wave (“harmonic”) and then integrating (from −∞ to +∞). The imaginary
part is found by using the sine wave instead.
However, there are many functions (applied disturbances), such as a step function, which do
not give us a finite result if we integrate them directly from −∞ to +∞ as in a Fourier
transform. As a result, a new decomposition technique was introduced, called the Laplace
transform.
It attempted to add an exponentially decaying part to the integral, to force convergence. In
effect, we were doing this (where σ is a real number):

F(s) = ∫(0..∞) e^(−st)·f(t) dt = ∫(0..∞) e^(−σt)·e^(−jωt)·f(t) dt

Integral = (exponential envelope) × (oscillatory part) × (function)

F(s) = ∫(0..∞) f(t)·e^(−(σ+jω)t) dt = ∫(0..∞) f(t)·e^(−st) dt, where s = σ + jω
This is actually still the Fourier transform if σ = 0. The limits of integration have also changed a
bit, because in Laplace transform analysis, we typically always assume that there was no
signal/impulse/disturbance prior to t = 0.
The Laplace transform is given a new symbol L:

L{f(t)} = F(s) = ∫(0..∞) f(t)·e^(−st) dt
Once again we can visualize this as just a “harmonic amplitude”, akin to cn in the complex
Fourier series. Of course, we no longer have harmonics, but a continuous spread of frequency
components.
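A quick numerical sanity check of the definition, for the decaying exponential f(t) = e^(−at), whose tabulated transform is 1/(s + a). Here s is taken as a real number (on the σ axis, ω = 0), and the step size and cutoff are arbitrary choices:

```python
import math

a, s = 3.0, 2.0
dt, t_end = 1e-4, 20.0

# Riemann-sum approximation of F(s) = integral of f(t)*exp(-s*t), t from 0 to infinity.
F = sum(math.exp(-(a + s) * k * dt) * dt for k in range(int(t_end / dt)))
print(F, 1 / (s + a))   # both ~0.2
```

The combined exponent −(a + s)t shows the mechanism directly: the e^(−st) envelope is simply multiplied into the signal before integrating, which is what forces convergence.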
We can also go backwards into the time domain by using the inverse Laplace transform:

f(t) = (1/2πj) ∫(σ−j∞..σ+j∞) F(s)·e^(st) ds

In other words, the real part of the Laplace transform is found by multiplying the (envelope-weighted) time domain signal by the cosine wave (“harmonic”) and then integrating, and the imaginary part is found by using the sine wave instead.
Note that σ is just the real part of the s-plane (horizontal axis). So as we stretch out to the right
or left side of the s-plane, we are in effect, including progressively higher envelope tailoring terms
to try and force convergence for successful integrations. We can “correct” for that when we go
back into the time domain by using the inverse Laplace transform.
Note that this technique still may not force convergence unconditionally, and so in a general
Laplace transform with a general stimulus/signal f(t), we may need to watch out for “regions of
convergence” (ROCs) too. It is out of scope here.
Note that now the s-plane can have both real and imaginary values, besides being positive or
negative. It is still best viewed as a mathematical construct. But it is now a two-dimensional
continuous spread of decomposed frequencies, and the relative phases of the decomposed
frequency components are accounted for by the use of two orthogonal axes (one real, called σ,
and one imaginary, called jω).
However, it can also be shown that when the time domain signal is entirely real, the upper half
of the s-plane is a mirror image of the lower half. In other words, any complex pole or zero will
have a “reflected version” (its complex conjugate) on the other side of the real axis. We will
come to that soon.
Enough of math! Let us try to visualize what we really did. In Figure 5.10, we show that after
applying an exponential term to “precondition the harmonic amplitude”, with the amount of
preconditioning dependent on the horizontal distance away from the center, we land up with
vertical slivers of imaginary frequencies, which are basically weighted Fourier transforms as
indicated. These constitute the “harmonics”, which in this case are really part of a smooth
spectrum.
Finally, in Figure 5.11, we present a table of common Laplace transforms, akin to the
logarithmic tables we discussed in Part 1. The relative ease with which Laplace transforms can
be manipulated, depends on the fact that the drudgery was already done long ago while creating
these lookup tables. Note that the Laplace transform of the unit step function, as shown in
Figure 5.12, exists, and is equal to 1/s. With a Fourier transform, the integration would have
“exploded.”
To reiterate: the basic value of the Laplace transform in our case seems to be:
a) If we have an accurate model of the plant and compensator (in the frequency domain)
and have calculated the transfer function of each in the s-plane, then multiplying that
with an arbitrary impulse (expressed as a Laplace transform), such as a “stepped
reference”, we can figure out the response of the system.
b) Then going back into the time domain, we should be able to see the ringing on the output,
as a result of the applied disturbance.
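For step (b), here is a small-signal sketch: instead of inverting the Laplace transform analytically, we simply integrate the LC post-filter's differential equations for a unit input step (same L, C, R as the running example; ESR and DCR ignored for simplicity) and watch the output ring:

```python
# Forward-Euler simulation of the undamped-except-for-the-load LC post-filter.
L, C, R = 5e-6, 330e-6, 0.2
dt, steps = 1e-7, 60000
iL = 0.0   # inductor current
vC = 0.0   # output (capacitor) voltage
peak = 0.0
for _ in range(steps):
    vin = 1.0                          # unit step applied at t = 0
    diL = (vin - vC) / L               # inductor: L di/dt = vin - vC
    dvC = (iL - vC / R) / C            # capacitor: C dv/dt = iL - vC/R
    iL += diL * dt
    vC += dvC * dt
    peak = max(peak, vC)
print(peak)   # ~1.36: Q of ~1.6 means an underdamped, visibly ringing response
```

The overshoot agrees with the classical second-order result exp(−πζ/√(1−ζ²)) for ζ = 1/(2Q) ≈ 0.31, i.e. about 36% above the final value, which is the "ringing on the output" the text refers to.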
But as Lloyd Dixon mentioned (see Part 1), our models for the plant and compensator may at
best be valid only for “small-signal” events. Do they apply if we slam the converter with a load
transient from 0 to max load? Or even to half max load?
If we do that, we actually discover there is another major problem, which none of our analysis
probably revealed. We may have encountered it in the past, but simply moved on by noting that
“the bench results were ‘mysteriously’ different from our theoretical calculations”. We blamed
it on “parasitics” and came out clean!
The truth is: to understand the output ringing under large-signal events, and then
suppress it, requires a rather different angle to our ongoing study, as we proceed to reveal
now. It goes beyond what Mr. Dixon had hinted upon when he talked about conditional
stability.
258
Figure 5.10: the Laplace transform decomposition process explained
259
f(t) F(s)
f(t) F(s)
f(t) F(s)
260
f(t ) 0 if t < 0
1 if 0
1
1
F(s ) 0
s
0
Time
In our ongoing attempt to acquire mathematical intuition, we now try to gain some mastery
over the gain-phase profiles of common functions in the s-plane. Because they often form the
building blocks of the transfer functions of the plant and compensator.
Historically, the existence of poles and zeros was explained in the following manner. A transfer
function can be eventually written out in the following general form:
V(s) a a 1s a 2s 2 a 3s 3 ....
T(s) k 0
U(s) a 0 a 1s a 2s 2 a 3s 3 ....
s
s s s s s s
1 1 1 ... 1 1 1 ...
Z0
z1 z 2 z3 Z1 Z2 Z3
K
s s s s s s s
1 1 1 ... 1 1 1 ...
P0 p1 p 2 p3 P1 P2 P3
The terms with “-” signs were troublesome as their “solutions” or locations were of the form s
= zn for zeros, or s = pn for poles. These were in the right half plane (of the s-plane) and it could
be shown that when we go over to the time domain, such terms tend to produce waveforms
with exponentially increasing amplitudes, which we want to avoid. In contrast, the terms with
a “+” sign seemed kosher, because their “solutions” or locations were of the form s = -Zn for
zeros, or s = -Pn for poles. These were in the left half plane (of the s-plane) and it could be shown
that when we go over to the time domain, such terms tend to produce waveforms with
exponentially decreasing amplitudes. So the effect of a disturbance eventually subsides, as we
desire.
However, the above way of writing the poles and zeros, though not wrong, can be misleading,
by virtue of what it hides. After all, the Pn could be negative! That would take it to the RHP! It
could be imaginary too. So how does it really behave? And so on.
Also consider the fact that the LC double pole has the form
A s B(s) 1
2
261
If we solve, we get
B B2 4A
s
2A
We thus have two “conjoined” solutions, and we could write this as
B B2 4A B B2 4A
s s
2A 2A
In the worst-case, 4A can exceed B2, the locations of the poles can be imaginary. And in that case
the solutions are complex conjugates of the form a+jb and a-jb. That means every pole has a
“reflection” below “sea level” (“sea level” being the 0dB axis!).
So, it is hard to see how, or even why, we would want to break the above LC pole transfer
function up into something like (s-x) × (s-y), as in the generalized form above.
5.9 Plotting some Transfer Functions that we may encounter (or may not!)
Realizing that there is more complexity than implied by the “generalized” pole-zero equation
above, we decide return to the basics and take a look at some common functions we are likely
to encounter. Using Mathcad, we will plot them out and see what they look like.
Six conceivable functions which produce poles are shown in Figure 5.13. Six similar functions
which produce zeros are shown in Figure 5.14. All the twelve plots are numbered sequentially
as indicated in the figures, and we will refer to those in the discussion below.
Finally, their locations in the s-plane are shown in Figure 5.15.
262
SOME FUNCTIONS WHICH LEAD TO POLES
1 Single pole at origin A Double pole at fCUTOFF
2
s s 1
USEFUL FOR CREATING HIGH DC PLANT GAIN FUNCTION
0 GAIN (INTEGRATOR) USING 0 APPLICABLE TO VOLTAGE MODE
FEEDBACK BLOCK CONTROL (LC POLE)
60 60
40 #1 π 40 #4 π
Phase angle, φ
Phase angle, φ
Magnitude, |G|
Magnitude, |G|
20 logA π/2
20 π/2 20
(radians)
(radians)
0 0 0 0
(dB)
(dB)
ω0
-20
ω0
-40 -π -40 -π
fCROSS fCUTOFF
-60 -60
0.1 1 10 100 103 104 105 0.1 1 10 100 103 104 105
Angular frequency, ω Angular frequency, ω
(rad/s) (rad/s)
60 60
40 #2 π 40 #5 π
Phase angle, φ
Phase angle, φ
Magnitude, |G|
Magnitude, |G|
20 logA
20 π/2 20 π/2
(radians)
(radians)
0 0 0 0
(dB)
(dB)
ω0
-20 -π/2 -20 -π/2
ω0
-40 -π -40 -π
fCROSS fCUTOFF
-60 -60
0.1 1 10 100 103 104 105 0.1 1 10 100 103 104 105
Angular frequency, ω Angular frequency, ω
(rad/s) (rad/s)
Phase angle, φ
Magnitude, |G|
Magnitude, |G|
20 logA 20 logA
20 π/2 20 π/2
(radians)
(radians)
0 0 0 0
(dB)
(dB)
ω0
-40 -π -40 -π
fCUTOFF fCUTOFF
-60 -60
3 4 5
0.1 1 10 100 10 10 10 0.1 1 10 100 103 104 105
Angular frequency, ω Angular frequency, ω
(rad/s) (rad/s)
263
SOME FUNCTIONS WHICH LEAD TO ZEROS
Single pole
zero at origin 2
Double zero at fCUTOFF
s
A s 1
0
0 FOR COMPENSATING VOLTAGE
MODE PLANT GAIN DOUBLE POLE
USING FEEDBACK BLOCK
60 60
40 π 40 π
Phase angle, φ
Magnitude, |G|
Phase angle, φ
Magnitude, |G|
20 logA
π/2 20 π/2
(radians)
20
(radians)
0 0
(dB)
0 0
(dB)
-20 -π/2
ω0
-20 -π/2
-40 ω0 #7 -π -40
fCUTOFF
#10 -π
fCROSS -60
-60
0.1 1 10 100 103 104 105 0.1 1 10 100 103 104 105
60 60
40 π 40 π
Phase angle, φ
Magnitude, |G|
Phase angle, φ
Magnitude, |G|
20 logA π/2
π/2 20
(radians)
20
(radians)
0 0
(dB)
0 0
(dB)
ω0
-20 -π/2 -20 -π/2
#8 #11
ω0
-40 -π -40 -π
fCROSS fCUTOFF
-60 -60
0.1 1 10 100 103 104 105 0.1 1 10 100 103 104 105
Phase angle, φ
Magnitude, |G|
Magnitude, |G|
20 logA 20 logA
20 π/2 20 π/2
(radians)
(radians)
0 0 0 0
(dB)
(dB)
ω0
264
Figure 5.15: Locations of the zeros (“O”) and poles (“X”) of the two previous figures
1
H(s)
s
0
This has infinite DC gain, and falls off at the rate of -20dB/dec (“-1”). It crosses over at an
(angular) frequency ω0, or equivalently, a frequency f0 = ω0/2π. It is a simple (first-order) “pole-
at-origin” or “pole-at-zero. It happens to be the integrator function we discussed in previously.
Also see its implementation in Figure 5.15 (of this part). It is an inevitable part of any analog
compensator.
This function has an associated phase angle of -90°, so combined with the -180° from the
negative feedback (inverting), we get -270°, which gives us 360-270 = 90° phase margin, as
mentioned previously. That is why this is the preferred shape for the final loop gain T too.
265
Figure 5.16: How the integrator is created in a Type 1 compensator
1
H(s) 2
s
0
The reason this is not useful to us is that the net phase lag from it is 180°, so it would give no
phase margin at all. We ignore it.
3) Plot #3 is of the form
266
1
G(s)
s 1
0
The location of the pole is the frequency at which the gain function intuitively “explodes” when
the denominator is zero. This occurs at the following frequency.
s 1. So s
0
0
We know by now that in the most general representation, s = σ + jω, where σ is the real part of
the frequency and ω its imaginary component. The s-plane is typically drawn with a vertical
axis of jω and a horizontal axis of σ.
So this function produces a simple (first-order) pole at -ω0, which is along the real axis in the
left half plane (LHP). See its location in Figure 5.13.
Note that since LHP poles and zeros are the ones we commonly run into, the default is
considered to be LHP, unless otherwise stated, i.e. as “RHP” (right half plane).
Note also that at this location, the gain function is said to “explode” because the denominator is
zero. But that is only intuitively correct. In reality, because we are dealing with imaginary
numbers, the magnitude of the gain is not infinite at this frequency. If we plot it out, we will
discover it actually rolls off—exactly as shown in Figure 5.13.
4) Plot #4 is of the form
1
G(s) 2
s 1
0
Basically we “allowed” very small, non-zero values for “v”. Then we plotted it out.
267
Alternatively, using Q, we can write the above equation as
1
G (s) 2
s 1 s 1
0 Q 0
RLOAD 1 #4
Cs
v RLOAD 1 At open-load condition (RLOAD = ) this is
Transfer function o Cs
vi RLOAD 1 Transfer Function
1
1
(Plot #4)
Ls RIND Cs
RLOAD 1 s2LC 1 s
2
Cs 1
1 0
Transfer Function 0 1
2 L R where f0 (break frequency/cutoff frequency)
s LC s RINDC IND 1 2 2 LC
R
LOAD R
LOAD
If inductor resistance (DCR) is 0, we get
Undamped (high-Q) transfer function with severe peaking
1 1
Transfer Function 2 and abrupt phase shift at f0 .
L s 1 s
1
2
s LC s
1
Quality Factor RLOAD
0 Q 0
For other load conditions, we get a
Q RLOAD C damped response based on non-infinite Q
L
1
G(s)
s 1
0
It has a solution (location) at s = ω0. It is a RHP pole, so unlike regular (LHP) poles whose phase
falls past the break frequency, for this it rises. It is virtually incompatible with the LHP poles
and LHP zeros seen in most switchers. We therefore ignore this pole.
6) Plot #6 is of the form
268
1
G(s) 2
s 1
0
This does produce two poles, but in fact we can consider them as two coincident poles, one RHP
and one LHP, by factoring it out the denominator as follows
s s
0 1 0 1
The phase contributions from both are opposite so there is no net phase shift as we see from
Figure 5.13. There is a net gain, but why resort to unnecessary complication to produce DC gain?
7) Plot #7: this is the “opposite” of #1. It is a zero-at-origin compared to pole-at-origin. It
lowers the DC gain, and is of no practical use to us. We ignore it.
8) Plot #8: this is just two of #7, i.e. two coincident zeros-at-origin. We ignore it too, for
the same reason as #7.
9) Plot #9: This is a first-order (simple) LHP zero of the form
H (s) s 1
0
We could place two of these single zeros at the LC pole frequency to try and cancel it out. That
is exactly what we do in standard Type 3 compensation as detailed in Part 1.
10) Plot #10 is the complement of the double pole peaky LC response is the double zero
gain dip. Once again, we needed to add a term involving Q to be able to plot it out.
2
1
H(s) s s 1
0 Q 0
Thus function can theoretically be used to provide the two zeros in the feedback block at the
exact position of the double LC pole coming from the plant. Hence we used the symbol “H” not
“G” for it here. This tells us something that is rarely discussed:
Just as we talk about the Q of the plant, why can’t we talk about the Q of the compensator
too? That could also be a way to enact our standard compensation strategy, instead of just using
two zeros of the type plot #9, as discussed above.
Far more on the use of this function (plot #10) will follow soon.
11) Plot #11 is of the form
G (s) s 1
0
Plotting its magnitude and phase, we see that it is a zero, because the gain starts sloping
upwards eventually. But this increase in gain is accompanied by a decrease in phase, not an
increase as for a “regular” LHP zero, shown in plot #9. We try to avoid LHP poles and zeros as
far as we can, because they tend to produce escalating responses when we try to go back to the
time domain. More on this later.
Unfortunately, this RHP zero does appear in the plant of boost and buck-boost topologies, but
not for the buck. Intuitively, that zero is often explained by saying that in a boost and buck-
269
boost, energy is delivered to the output only during the switch OFF time. But if for example
there is a sudden load increase, and the output dips, the control loop responds by increasing
the switch ON-time in an effort to build up more energy in it as required. Unfortunately that
reduces the time for the energy to be delivered to the output, so that dips even further
momentarily.
LHP zeros are very hard to stabilize or avoid, and usually the only solution is to roll off the loop
gain at a much lower frequency than we usually do for a buck.
We have understood the behavior of common transfer functions in the frequency domain. We
know the locations of their poles and zeros. We can recognize some of them from our previous
experience dealing with analog control loops.
We are also now keenly aware of the fact that not just the phase shift (if any), say at the resonant
frequency, is of importance, but the actual (net) phase angle too—so as to prevent 180° shift
from ever occurring with an accompanying gain of unity (0dB) or greater. (That is another thing
glossed over in related literature sometimes). For example, the function “1/(s/ω 0)” has a
constant phase lag of 90°, and its “breakpoint” is at a very low un-definable/un-plottable,
literally invisibly low frequency. But because of its seemingly frequency-independent, flat
phase lag of 90°, it leaves us with only 90° to play around with in terms of phase margin. Which
is why we must cancel out the pole coming from the plant, whether it is VMC (180° phase lag
from L and COUT) or current-mode control (90° phase lag from RLOAD and COUT).
We also learn to avoid RHP poles/zeros as far as possible. They tend to give escalating
responses in the time domain, whereas the LHP tends to give responses that decay, as we desire.
To show this more clearly, let us start with the definition of the Laplace transform
e
st
F(s) f ( t)dt
0
270
We calculate the Laplace transform of the exponentially decaying function e-at (starting from
zero at t = 0, as we always assume in normal Laplace transform analysis). It turns out to be
1/(s+a). See also the tables in Figure 5.11. Equivalently stated, the inverse Laplace transform of
the step function 1/(s+a) is e-at, which is a nice exponentially decaying function—for any “a”
that is real and positive.
But what exactly is the function 1/(s+a)? It is by definition, a pole at frequency “-a”—which is
in the left half of the s-plane, assuming “a” is real and positive. We therefore realize that the unit
step function (1/s or 1/(s+a)) corresponds to a “well-behaved” function in the time domain,
one which decays exponentially with time.
In power supplies, the poles and zeros we deal with are usually on the left half plane. We simply
try to avoid the right half plane (RHP) altogether. We thus lower the bandwidth (crossover
frequency) significantly, in boost and buck-boost topologies, where the notorious RHP zero
emerges. There is almost no other solution to this particular RHP problem.
With this new understanding, the basic idea behind classic analog control loop compensation
in a voltage mode controlled buck is therefore:
A) Create a pole at the origin (p0) in the feedback block (using plot # 1)
B) Note the location of the double pole at the LC resonant frequency (plot #4)
C) Place two (LHP) zeros (z1 and z2) at the LC pole frequency (two of plot #9)
D) Note the location of the ESR-zero (plot #9)
E) Place a simple (LHP) pole (p1) at the location of the ESR-zero (plot #3)
F) Place a simple high-frequency (LHP) pole (p2) at fCROSS, 10×fCROSS or fSW/2 (plot #3)
Refer to the standard analog compensation strategy shown in Figure 5.18.
In our version of digital compensation, we replace step C above with this new step (to be
discussed in detail soon):
C) Place one second-order zero at the LC pole frequency (plot #10)
271
Figure 5.18: Analog compensation strategy
Certain concepts we may have realized from the previous pages after plotting out various
functions are captured more clearly in Figure 5.19. For example, if poles and zeros lie along the
imaginary axis, we get a “peaky” response (in theory, an infinite peak exactly along the vertical
axis). Also, if the poles and zeros lie along the real axis, we get a “damped” response.
Looking at Figure 5.19, we try to understand which term in the transfer function leads to the
“peaking”. As mentioned, we had actually added a small (yet undeclared) term in “s” to be able
to Plot #4 in the first place. Otherwise the peaking would have been infinite and the associated
phase shift extremely abrupt at the resonant frequency ω 0.
The transfer function of the LC post-filter stage of the plant (in VMC), with this “s” term
highlighted is as follows. It is essentially plot #4 with a certain Q included.
272
1
G (s) 2
s 1 s 1
0 Q 0
From the definition of ω0 (=1/(LC)1/2), and Q (= RLOAD× (C/L)1/2), we conclude that the magnitude
of the term involving “s” does not (noticeably) affect the resonant frequency, which is determined
(almost) solely by the term involving “s2”, but it does determine the “peakiness” of the response at
that resonant frequency.
In general, if the coefficient of s is small, the “peakiness” is much higher.
We have a similar but inverted plot (for zeros) in Plot #10. And so we realize we may need to
start thinking in terms of the Q of the compensator too, not just the plant.
Gain_dB
log f
jω
Gain_dB
σ
log f
f(t)
produce
de-escalating escalating
time time time-
time-
dependent s-plane dependent
responses (s=σ+jω) responses
Poles
Zeros
Figure 5.19: Summary of behavior of poles and zeros of common transfer functions
273
5.13 Limits of Analog Compensation
Despite our presumed familiarity with analog control, and even accepting the fact that we can’t
change the compensation “on the fly”, analog loops are difficult to tweak as detailed in Chapter
4. We can do that far more easily in digital implementations. Also, analog compensation is
inherently limited in a certain way which we will soon describe, though that is rarely clarified
in related literature.
In Figure 5.20, we once again present the equations and typical strategy for Type 3
compensation as used in voltage mode controlled regulators. As discussed earlier, the key is
introducing two coincident zeros in the compensator, at the exact position of the double LC-
pole of the plant. The purpose of that is to “cancel” the double pole. We also require a pole at
the origin, to get the DC gain high, for rejecting low-frequency disturbances in particular. But
we also roll-off the loop gain to avoid oscillations.
What we really and unavoidably require from the compensator is one pole-at-origin and two
zeros. However the way a Type 3 compensation is set up, using three capacitors and two
resistors, there is a lot of interaction between all the R’s and C’s, so we end up with more than
that.
The following is the well-known gain function of a Type 3 compensator:
sC2 R1 R3 1 sC1R 2 1
H(s)
sR1C1 sC2R3 1 sR 2C3 1
Equivalently
s s s s s s s
1 1 1 ... 1 1 1 ...
V(s) Z0
z1 z 2 z3 Z1 Z2 Z3
T(s) K
U(s) s s s s s s s
1 1 1 ... 1 1 1 ...
P0 p1 p 2 p3 P1 P2 P3
We thus realize that Type 3 compensation gives us two normal (LHP) zeros as desired, but also
two normal (LHP) poles, besides the much-desired pole-at-origin. Note that all the poles and
zeros are based on RC values and are therefore “single-order”. They will provide an upward or
downward slope of 20dB/decade past the point defined by the applicable break frequency (ω X
= 1/RC). There will be no “peakiness” however, because these frequency locations are along the
real axis (see Figure 5.19).
The thing to note here is we need to do something with the two extra poles, even if we don’t think
we need them. One obvious use of one of the “extra poles” is to cancel out the ESR-zero coming
from the plant. This may however be at a very high frequency nowadays, because of the low
ESR of modern ceramic capacitors. So it may be irrelevant to cancel.
274
The remaining “extra pole” is a subject of some debate as indicated in the figure. Some suggest
putting it at fsw/2, others at fsw/10. At the Unitrode Seminar in Germany in 1996, Lloyd Dixon
suggested putting it at the crossover frequency, fCROSS.
Another problem is, as we know from Part 1 is that we need to simplify the actual transfer
function of a Type 3 compensator to make it usable—by introducing the approximation
C1>>C3. Which implies, we already have an inherent error in all our calculations. Besides that,
we could have an additional +, - 10% tolerance, just based on the nominal value of available
caps. And so on. There is barely any precision in analog compensation.
Besides that, how difficult is it to tweak the poles and zeros? We discussed that in Part 1 too.
The overall answer is very difficult.
But the biggest problem with standard Type 3 analog compensation is yet to come! We now
discuss that.
275
5.14 Unrecognized Effects of "Q-Mismatch" in Type 3 Analog Compensators
One of the most common and erroneous assumptions of loop compensation is the statement:
“We can place two single-order zeros at the same frequency location, and that is equivalent
to a double zero”.
This is what we always implicitly assume when we place two zeros in the compensator to “kill”
the LC double pole of the plant. But that cancellation is actually only partial as we will
discover.
Suppose we place two single zeros at the exact same frequency location. That gives us the
following transfer function
2
s s2 s
H(s) 1 2 2 1
Z Z Z
Whereas, we know that the plant gives us the following transfer function
1
G (s) 2
s 1 s 1
0 Q 0
For the two zeros to cancel the pole, we need to have the two cancel each other out
s2 s
2 2 1
Z Z
1
2
1
s s 1
0 Q 0
Clearly that happens only for one condition: Q of plant equal to 0.5. Otherwise not.
Because as load changes, the Q of the plant can vary from small values (at max load) to very
large values (light loads).
In other words, to cancel each other out, not only the LC pole frequency (ω 0 or ωLC in this case)
must equal the frequency of the two coincident zeros (ω Z), but the crucial terms involving “s”,
which determines the amount of damping as discussed, must be equal. And under light load
conditions, Q of the plant can become very high, so the “s” term becomes very small, and so the
LC peaking is very severe. Whereas in the analog compensator with coincident zeros, we have
the factor “2” in the term involving s, which is far from negligible at the resonant frequency ω Z.
As a result of that, the response of the compensator is extremely well-damped. But not the plant’s
response. How can they hope to cancel each other out?
Note that in the following article,
http://powerelectronics.com/power_systems/simulation_modeling/Transient-response-
phase-margin-PET.pdf , the author, Mr. Basso, tweaked the plant gain (via simulations as usual),
to get a Q of 0.5, and noted that it gave the best transient response. But instead of recognizing
that it was simply a case of Q-matching, because the analog compensator had the same Q=0.5,
he seems to have attributed it to the phase margin (76°) at that point, and thus declared that
76° was the optimum phase margin (despite the rest of the industry saying it should be 45°).
276
Actually, this was almost clear evidence that by simply matching the Q of the plant and the
compensator, we get the best transient response because the pole-zero cancelation is near-
perfect. Of course it is not realistic to set Q of the plant to 0.5, because that occurs for only one
specific load. A better approach would be to let Q of the plant be what it is, and try to adjust the
Q of the compensator to match the Q of the plant, for a wide range of loads. That is what we can
accomplish very easily using digital control.
We have just stumbled upon the topic of “Q-mismatch”. Because the Q of the plant can vary
anywhere from decimal values at max load to very big numbers at light loads. Whereas the Q of
the compensator (and we almost forgot to even ask what it was), is fixed!
At what value?
Looking at the s-terms:
1 s
compared with 2 s
Q 0 Z
we realize the Q of a Type 3 analog compensator (with coincident zeros) is always 0.5, irrespective
of the R and C’s used in the feedback network.
This is the major issue plaguing standard analog compensators—its inability to cancel the LC
pole properly, leading to conditional stability, which as it turns out, is not as harmless as we may
have thought.
Before we discuss conditional stability again, note that there is another way to state the Q-
mismatch issue. In terms of the functions we plotted in Figure 5.13 and Figure 5.14, we realize
we are essentially guilty of trying to cancel the peaky LC double pole and its complex conjugate
poles (spread vertically parallel to the imaginary axis, above and below “sea level”, i.e. the 0 dB
axis), with two simple zeros, both completely constrained on the real axis. How can they ever
cancel each other out totally, if they are not located on top of each other in the s-plane?
The pole-zero “cancelation” is barely perfect in standard analog techniques, and as a result the
loop gain shows severe peaking and “wobbling” of phase around the LC pole frequency. See Figure
5.21
But why is that a problem? As mentioned in Part 1, this is called “conditional stability”. Lloyd
Dixon warned us a bit about that in reference to gain collapsing. No one really seems to have
realized it was a problem related to the output ringing under large-signal transients.
But now let us look at the evidence. We took a buck converter with the following parameters:
L= 330 nH, C = 546 μF, ESR = 520 μΩ, DCR = 8.53 mΩ ,VIN = 12V , VOUT = 1.2V, IOMAX = 5A
Its calculated LC pole frequency is 11.86 kHz. Next, we applied a 0-2A load transient. The
waveform in Figure 5.22 emerged. Eyeballing the output, the ringing is at ~12.56 kHz, very
close to the theoretically calculated LC pole frequency. And these are not simulations!
Question: Why do we see ringing at frequencies close to the LC resonant frequency, not at the
supposed “likeliest” frequency at which instability occurs, i.e. f CROSS?
277
We could argue along the lines of Lloyd Dixon here: that under large-signal events such as this,
the inductor is unable to provide power temporarily for several cycles as the current ramps up
to its new value (the “inductor reinitialization” problem mentioned in Chapter 4). As a result
the gain collapses and fCROSS temporarily falls close to the LC pole frequency, triggering
temporary oscillations at that frequency. The steep wobble in the phase at the LC pole
frequency, especially at light loads, as shown in Figure 5.21, is clearly not helping us.
We conclude that conditional stability is not as harmless as made out to be commonly.
And perhaps none of this will ever emerge if we continue relying on simulations using small-
signal models. That was also the opinion voiced by Lloyd Dixon too, in 1996: “…there has been
a lack of balance and a tendency to try to force behavior that is uniquely related to switching
phenomena into linear equivalent models (with sometimes uncertain results). Many of the major
significant problems with switching power supplies do not show up in the frequency domain, or in
the time domain using averaged models, unless these problems are anticipated in advance and
provided for in the models. Simulation in the time domain using switched models, although
slower, reveals these problems that would have been hidden."
278
60 60
50 50
40 40
30 30
Gdb( f ) Gdb( f )
Hdb( f ) 20 Hdb( f ) 20
OPENdb ( f ) OPENdb ( f )
10 10
0 0
10 10
20 20
3 4 5 6 3 4 5 6
10 100 110 110 110 110 10 100 110 110 110 110
f f
180 180
135 135
90 90
180 180
arg( G( f ) ) 45 arg( G( f ) )
45
180 180
arg( H ( f ) ) 0 arg( H ( f ) )
0
180 180
arg( OPEN ( f ) ) arg( OPEN ( f ) )
45 45
90 90
135 135
180 180
3 4 5 6
10 100 110 110 110 110 10 100
3
110
4
110
5
110
6
110
f f
279
Figure 5.22: Output ringing analysis
Let us use Mathcad to do some mathematical explorations on a Type 3 compensator. Its transfer
function (for C1>>C3) is
sC2 R1 R3 1 sC1R 2 1
H(s)
sR1C1 sC2R3 1 sR 2C3 1
Equivalently
280
To prove this, in Figure 5.23 , we compare the light-load response of a standard analog
compensator to the response that is obtained by forcibly tweaking it by setting v = 0.01. We see
an almost complete cancelation of the LC pole occurring now, in terms of both the loop gain
magnitude and phase. Most importantly, conditional stability is significantly reduced if not
wiped out. We expect a much better transient response too now, as we will soon see.
Indeed, this was just mathematical. There is no easy way to implement it using analog control.
However, we will now proceed to show that digital control provides us the capability to implement
such a unique transfer function for the compensator.
In brief: Instead of putting two coincident single-order zeros based on plot #9 at the LC
pole, we can use digital techniques to produce a second-order zero with a more appropriate
response curve based on plot #10.
60 60
50 50
40 40
30 Gdb( f ) 30
Gdb( f ) Gdb( f )
Hdb ( f )
Hdb ( f ) Hdb ( f )
20 OPENdb ( f ) 20
OPENdb ( f ) OPENdb ( f )
OPENdb ( f )
10 10
0 0
10 10
20 20
3 4 5 6 3 4 5 6
10 100 110 110 110 110 10 100 110 110 110 110
f f
180
180
135
135
90
90
180
180 arg ( G( f ) )
arg ( G( f ) ) 45
180
arg ( G( f ) )
180 45
180 arg ( H ( f ) )
arg ( H ( f ) )
180
0
arg ( H ( f ) ) 0
180
180 arg ( OPEN ( f ) )
arg ( OPEN ( f ) )
180
45 arg ( OPEN ( f ) )
45
180
arg ( OPEN ( f ) )
90
90
135
135
180
10 100
3
110
4
110
5
110
6
110 180
3 4 5 6
f 10 100 110 110 110 110
f
281
5.18 What is Damping?
Let us now define a general transfer function and see when we get an imaginary component to
the solution.
0
s 2 1 s
H(s)
Q 0
1
The solutions (locations of the zeros) are given by setting the above to zero. Solving
s = - 2Q1
0 4Q
1
2
1
1 1
s = - 1 0
2Q 4Q 2
1 1
4 or 2 or Q 0.5
2 Q
Q
And imaginary solutions if Q > 0.5. So Q = 0.5 is called critical damping. It corresponds to two
coincident zeros, exactly as in classic Type 3 analog compensation, as per our strategy.
Note that using digital control, it is easy not only to force the Q of the compensator to values
greater than 0.5, but less than 0.5 too. In the latter case, the compensator would be considered
over-damped. The solutions to the general second-order equation above, in the case of Q<0.5
would be considered overdamped. If Q>0.5, it is underdamped. The latter occurs the moment
an imaginary part enters the solution. We then get “peakiness” (or “dippiness”) as sketched in
Figure 5.15 and Figure 5.19
In literature, the damping factor is often used instead of Q. It is ζ=1/2Q. So critical damping
corresponds to Q=0.5 or ζ=1.
When Q>0.5, we will start getting two zero locations, corresponding to split zeros. These will be
symmetrical about the 0dB-axis (above and below “sea level”). They are complex conjugates of the
form a+jb and a-jb.
We will plot these out shortly for all Q.
282
5.20 A Useful Hint: Impedance of a Capacitor
Let us take another function that often appears in electronics in general. It is the impedance of
a capacitor! If we look across its terminals we see the series combination of its series inductance
(ESL), its series resistance (ESR), and its bulk capacitance (COUT). So its impedance is
1
Z(s) (ESR) (ESL)s
(COUT )s
We have a constant (frequency-independent) term “ESR”, another term which is proportional
to frequency (ESL) and another which is inversely proportional to frequency (1/C OUT). Just to
be more general here, let us write this as
V
F(s) U Ws
s
In our case here, this means that U=ESR, V = 1/COUT and W = ESL. The reason we are writing
this out in a general form is, we see this type of function in different forms everywhere.
Simplifying (to our preferred form), we get
2
s U s
1
2 V VW V
Us V Ws W W
F(s)
s s 1
V
The denominator is the equation of an integrator (plot #1 in Figure 5.13). The numerator is in
the (modified) form of plot #10 in Figure 5.14Errore. L'origine riferimento non è stata
trovata.. The “Q” of the capacitor impedance plot can also be defined as
1 1 ESL
Q CAP _ Z VW
U C
ESR OUT
This being a series RCL circuit, the Q is just the reciprocal of the parallel resonant circuit that
constituted the LC post filter, where we had
C OUT
Q PLANT R LOAD
L
Nevertheless, the analogy is striking.
The analogy carries over to the transfer function of a Type 3 compensator too. Let us look at
that too.
s2 C1C2 R 1 R 3 R 2 s C2 R 1 R 3 C1R 2 1
H(s)
sR 1C1 sC2R 3 1 sR 2C3 1
We learned that if we put the two zeros coincident, we get from
283
1
fz1
2 R1 R3 C2
and
1
fz2
2R 2 C1
C1 R 1 R 3
Condition for coincident zeros
C2 R2
R R C
sR 1C1 s 2 3 1 1 s R 2C3 1
R1 R3
As mentioned before, the two poles from the Type 3 function may be one too many. What we
unavoidably need is the pole-at-origin (1/s term) and the two zeros (numerator). So let us move
p1 and p2 out of the way for now. In digital control we can always bring in additional poles,
almost at will. In our case here this is equivalent to shorting R3 and removing C3. See what it
does to a Type 3 compensator in Figure 5.20. Now, assuming we have moved those two poles
out of the way, we are left with a simplified Type 3 compensator transfer function
H(s) = (s·C1R2 + 1)²/(s·R1C1) = [ (s/ω0)² + 2·(s/ω0) + 1 ] / (s/ωp0), where ω0 = 1/(C1R2) and ωp0 = 1/(C1R1)
Comparing this with the general equation involving U, V and W, we see they are similar, except
for the following key fact:
The Q of the general function is √(VW)/U and it can take virtually any value, whereas the
Q of a Type 3 compensator is fixed at 0.5. As mentioned, that is a key limitation of the Type 3
analog compensator.
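A quick numeric check (a Python sketch; the component values are arbitrary placeholders) confirms that the coincident-zero numerator (s·C1R2 + 1)² always yields Q = 0.5, no matter which components we pick:

```python
import math

# Arbitrary (hypothetical) component values; the conclusion is value-independent
C1, R2 = 2.2e-9, 10e3

# Numerator of the simplified Type 3 compensator, (s*C1*R2 + 1)^2,
# expanded as a2*s^2 + a1*s + a0:
a2 = (C1 * R2) ** 2
a1 = 2 * C1 * R2
a0 = 1.0

# Matching a0*((s/w0)^2 + s/(Q*w0) + 1) term by term:
w0 = math.sqrt(a0 / a2)      # break frequency of the coincident zeros, rad/s
Q = math.sqrt(a2 * a0) / a1  # quality factor of the zero pair

print(Q)  # always 0.5: the built-in limitation of the analog Type 3
```

Because a2 and a0 come from a perfect square, √(a2·a0)/a1 collapses to 1/2 regardless of C1 and R2.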
Other than that, based on the similarity, we conclude that impedance of a capacitor and gain
(transfer) functions of compensators can be visualized very similarly.
If only we had some way of changing the Q of the compensator to a value of our choice, to induce
Q-matching!
We also realize that there will be the equivalent of a "pole-at-origin" in the impedance plot, and
it will cross over at the frequency ωp0 = V (in rad/s), or equivalently fp0 = V/2π (in Hz). From the
numerator we will get two zeros at the frequency ω0 = √(V/W), or equivalently f0 = √(V/W)/2π.
The damping will depend on the factor Q = √(VW)/U. If this is less than or equal to 0.5, we will get
a critically damped or over-damped response. If not, we will get complex solutions with corresponding
peaky responses. In the latter case we will get two split zeros, both in the LHP, symmetrical
about the real axis, corresponding to the two complex conjugate solutions
s = ω0·[ -1/(2Q) ± √( 1/(4Q²) - 1 ) ]
The average of these, computed logarithmically, is still ω0, the location where the two zeros
were coincident, for Q=0.5.
fAVG = 10^[ (log fz1 + log fz2)/2 ] = √(fz1 × fz2) = f0
b) Now if Q > 0.5, what happens? We plot this out using Mathcad too. The results are
presented in Figure 5.25. We can see the complex conjugates, above and below “sea
level” (0dB), as mentioned previously. Note that the actual location of the cusp in our
real world, say observed on a scope, correlates to the radial distance away from the
origin of the s-plane, and this remains fixed as the zeros separate vertically, for the
simple reason that the break frequency, as determined by the term involving "s²", has
remained the same.
We now have a complete visualization of how a function of the following type behaves:

F(s) = U + V/s + W·s
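The zero-splitting behavior described above can be checked numerically. The sketch below (Python; the 10 kHz break frequency is just an illustrative choice) solves (s/ω0)² + s/(Qω0) + 1 = 0 in both regimes:

```python
import cmath
import math

def zero_pair(w0, Q):
    """Roots of (s/w0)^2 + s/(Q*w0) + 1 = 0, i.e.
    s = w0*( -1/(2Q) +/- sqrt(1/(4Q^2) - 1) )."""
    disc = cmath.sqrt(1.0 / (4.0 * Q * Q) - 1.0)
    return w0 * (-1.0 / (2.0 * Q) + disc), w0 * (-1.0 / (2.0 * Q) - disc)

w0 = 2 * math.pi * 10e3  # illustrative break frequency (10 kHz)

# Q < 0.5: two real split zeros; their logarithmic (geometric) average stays at w0
sa, sb = zero_pair(w0, Q=0.1)
print(math.sqrt(abs(sa) * abs(sb)) / w0)  # ~1.0

# Q > 0.5: complex-conjugate zeros; the radial distance from the origin stays w0
sc, sd = zero_pair(w0, Q=5.0)
print(abs(sc) / w0, abs(sd) / w0)         # ~1.0 each
```

This is exactly the "cusp stays put" observation: the product of the roots is always ω0², so neither the geometric mean (Q < 0.5) nor the radial distance (Q > 0.5) moves as Q varies.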
Figure 5.24: Spreading of zeros as Q falls below 0.5
Figure 5.25: How zeros split up as Q exceeds 0.5
5.21 Introduction to PID Coefficients
Books have been written trying to "intuitively" explain proportional, integral, derivative
(PID) coefficients. Various statements have been made in the literature about the number of ways
the loop can react once an error is detected. For example, it could look at the absolute error (on
an instantaneous basis) and react proportionally to the error, or look at it over a period of time
and thus react to its integral (over time), or look at how fast it changed and react based on its
rate of change (derivative term), and so on. They are all correct. But as pointed out earlier,
“intuition” based on mechanical systems is not very meaningful when we come to switchers.
That is the reason we are actually going to resort to some limited math, based on our previous
understanding of the behavior of certain transfer functions, to explain “PID coefficients” here.
And the results are startling, even from a purely mathematical viewpoint, as we will now see.
In a switcher, we could theoretically combine all three types of responses in parallel, to give us
a “PID compensator”, as shown in Figure 5.26. Its behavior, computed using Mathcad, is
presented in Figure 5.27. In the time domain (as measured on a scope), the effect of each is often
portrayed as in Figure 5.28. For example, a high kI reduces the DC settling error, as discussed
in Part 1. That error, we realized, was largely controlled by the integrator section! And that had
a transfer function based on the 1/s function (plot #1 in Figure 5.13). So it is no surprise to learn
that the integral coefficient, kI,
is essentially the integrator section of the PID compensator!
We must emphasize that implementation of PID coefficients is possible even with analog
techniques using multiple op-amps for example, but its implementation using digital
techniques is far more precise and easy. Not to mention: powerful. Therefore, PID compensators
have tended to become synonymous with digital control.
Here is the transfer (gain) function of a typical PID compensator:
H(s) = kP + kI/s + kD·s
where kP, kI and kD are the “PID” (proportional-integral-derivative) coefficients, respectively.
We immediately realize that the transfer function of a PID compensator is also analogous to the
impedance plot of the capacitor, discussed previously. Clearly, like the impedance equation,
this one is also similar to the general function we discussed above:
F(s) = U + V/s + W·s
And that takes us to the “grand analogy”, as discovered one afternoon by Sanjaya…
Figure 5.26: PID control
Figure 5.27: PID response characteristics
5.22 The Grand Analogy
See Figure 5.29 in detail now. It presents the analogy between the transfer function of a basic
digital (PID) compensator and the impedance plot of a capacitor! In Figure 5.30, we present a
numerical example to show that:
If we fix kI and kD (i.e. V and W, or 1/COUT and ESL, respectively), and only vary kP (i.e. ESR,
or U), we manage to tune Q! The "dippiness" of the curve changes, but not the resonant/break
frequency.
And since we can vary Q by simply varying kP (or U), without changing the location of the
"two zeros" (ω0 = √(V/W)), we can get the Q of the compensator (√(VW)/U) to match the Q of
the plant (RLOAD·√(COUT/L)), thus ensuring proper cancellation of the LC pole, no
conditional stability concerns, and much reduced ringing in the output under large-signal
transients.
This is the scheme of things we are focusing on. We have a potentially far more powerful
compensation technique in our hands today, thanks to digital techniques.
In Figure 5.31, we present the full analogy between a capacitor’s impedance and a PID
compensator. Such capacitor impedance curves are available in most capacitor datasheets. But
we have now seen that the PID compensator is really no different in terms of visualizing
matters! We do not need pages on building physical intuition. For once, the math speaks louder.
And once we understand that math, our ability to manipulate PID coefficients, to do what our
typical Type 3 analog compensator was doing, and much more, goes up exponentially. Digital
techniques just help us immensely in implementing these compensators based on PID
coefficients.
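As a sketch of that visualization (Python; the kI and kD values below are hypothetical), we can evaluate |H(jω)| for H(s) = kP + kI/s + kD·s and note that at the zero-center frequency f0 the two reactive terms cancel, so the dip bottoms out at exactly kP:

```python
import math

def pid_mag(f, kP, kI, kD):
    """|H(j*2*pi*f)| for the PID compensator H(s) = kP + kI/s + kD*s."""
    w = 2.0 * math.pi * f
    return abs(complex(kP, kD * w - kI / w))

# Hypothetical kI and kD, held fixed (they set fp0 and f0)
kI = 2 * math.pi * 5e3                   # pole-at-origin crossover fp0 = kI/2pi = 5 kHz
kD = 1.0 / (2 * math.pi * 50e3)
f0 = math.sqrt(kI / kD) / (2 * math.pi)  # center of the two zeros

for kP in (2.46, 0.246, 0.0246):         # only kP varies, as in the numerical example
    Q = math.sqrt(kI * kD) / kP
    # the dip level at f0 equals kP; f0 itself never moves
    print(f"kP={kP}: Q={Q:.3f}, |H(f0)|={pid_mag(f0, kP, kI, kD):.4f}")
```

Reducing kP by a factor of ten deepens the dip by 20 dB and raises Q tenfold, while f0 stays put: the "peakiness knob" in one line of arithmetic.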
Other definitions: critical damping Q = 0.5; damping factor ζ = 1/(2Q); critical damping factor
ζCRIT = 1; equivalently, quality factor Q = 1/(2ζ).

                                    General F(s)    Capacitor Z(s)          PID H(s)
Quality factor Q                    √(VW)/U         (1/ESR)·√(ESL/COUT)     √(kI·kD)/kP
Crossover of pole-at-origin, fp0    V/2π            1/(2π·COUT)             kI/2π
Location of (center of) the
two zeros, f0                       √(V/W)/2π       1/(2π·√(ESL·COUT))      1/(2π·√(kD/kI))
[Figure (graphic): The analogy table maps Z(s) = R + 1/(Cs) + Ls (R being the ESR of the cap
here, and L its ESL) onto F(s) = U + V/s + W·s and H(s) = kP + kI/s + kD·s: the constant term is
R ↔ U ↔ kP, the term inversely proportional to frequency is 1/C ↔ V ↔ kI, and the term
proportional to frequency is L ↔ W ↔ kD. Other definitions: damping factor δ = U/√(VW) =
R·√(C/L) = kP/√(kI·kD), with critical value δCRIT = 2; classic damping factor ζ = δ/2, with
ζCRIT = 1; crossover of pole-at-origin fp0 = V/2π = 1/(2πC) = kI/2π. The accompanying Mathcad
plot sweeps 10 Hz to 10 MHz and shows that, with V and W fixed, varying only U (i.e. R, or kP)
changes the peakiness but not the break frequency: U = 2.46 (R = 2.46 Ω) gives Q = 0.05,
overdamped, with the flat dip at 20·log(2.46) = +7.8 dB; U = 0.246 gives Q = 0.5, critically
damped, at 20·log(0.246) = -12.2 dB; and U = 0.0246 gives Q = 5, underdamped ("peaky"), at
20·log(0.0246) = -32.2 dB.]
Figure 5.31: The full analogy between a capacitor impedance and a PID compensator
In Figure 5.32, we present a summary of all the equations we need to implement Q-matching
using digital compensators. First, we simply set kI, based on the desired crossover, using the
well-known equation (with kI introduced)
fp0 = kI/2π = fCROSS × (VRAMP/VIN)
Once we know kI, we can fix kD, by setting the two “coincident” zeros of the PID compensator
at the LC pole. So
f0 = 1/( 2π·√(kD/kI) ) = fLC = 1/( 2π·√(LC) )
Then for any given load, since we know the Q of the plant, if we are sensing the load, we can
appropriately set kP using Q-matching:
QCOMP = √(kI·kD)/kP = QPLANT = RLOAD·√(COUT/L)
And that sums up our technique for fine-tuning PID coefficients, to achieve excellent transient
response, which was eventually validated on the bench too.
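The three steps above can be collected into one small routine. This is a sketch in Python (the function and parameter names are our own, and the example operating point is hypothetical), not vendor firmware:

```python
import math

def q_matching_pid(f_cross, v_ramp, v_in, L, C_out, r_load):
    """Compute PID coefficients per the Q-matching recipe described above:
    1) kI from the desired crossover, 2) kD to place the coincident zeros
    on the LC pole, 3) kP to match the compensator Q to the plant Q."""
    kI = 2 * math.pi * f_cross * v_ramp / v_in   # fp0 = kI/2pi = fCROSS*(VRAMP/VIN)
    kD = kI * L * C_out                          # so that sqrt(kD/kI) = sqrt(LC)
    q_plant = r_load * math.sqrt(C_out / L)
    kP = math.sqrt(kI * kD) / q_plant            # QCOMP = sqrt(kI*kD)/kP = QPLANT
    return kP, kI, kD

# Hypothetical buck: 12 V in, 1 V ramp, 50 kHz crossover, 1 uH / 470 uF, 0.5 ohm load
kP, kI, kD = q_matching_pid(50e3, 1.0, 12.0, 1e-6, 470e-6, 0.5)
f0 = 1 / (2 * math.pi * math.sqrt(kD / kI))  # lands on fLC = 1/(2*pi*sqrt(LC))
print(kP, kI, kD, f0)
```

In a load-sensing implementation, only the kP line needs to be re-evaluated as RLOAD changes; kI and kD stay fixed, exactly as the architecture discussion below requires.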
Figure 5.32: Full equation set for implementing Q-matching
On Sept 17, 2015, this technique was partially implemented by Sanjaya on a vendor's latest-
generation digital controller (ZMDI, later IDT, then Renesas). Partially, because it was not
possible to continuously feed in a kP value based on load, given the architecture of the device.
However, we did set kP to tune out the LC pole almost completely at a fixed half-maximum
(2.5 A) load. Load transients from 0 to 2 A and from 0 to 5 A were carried out. In each case, the
part's existing features were successively activated to see the improvement. Finally, all the
existing features were deactivated and only the PID coefficients based on our new method
described above were entered. The overshoot/undershoot was almost a factor of 2 better than
their "state of the art" achieved so far over the years, which bears out the Q-mismatch discovery
and our solution to it.
Some authors prefer to talk in terms of time constants. That form is more difficult to get a feel
for and manipulate. Nevertheless, for completeness' sake it is mentioned here.
G(s) = kp + ki/s + kd·s

Often rewritten as

G(s) = kp·[ 1 + 1/(s·τi) + s·τd ]

So, ki = kp/τi and kd = kp·τd.

Modified by an extra pole in the last term:

G(s) = kp·[ 1 + 1/(s·τi) + s·τd/( 1 + s·τd/N ) ]
So now we have an extra pole, which we can use to kill the ESR-zero if we desire, or to lower the
phase margin judiciously by placing it around fCROSS, as Lloyd Dixon had suggested.
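The mapping between the two notations is trivial to code. A minimal sketch (Python; the names tau_i and tau_d follow the τi and τd of the equations above):

```python
def parallel_from_series(kp, tau_i, tau_d):
    """G(s) = kp*(1 + 1/(s*tau_i) + s*tau_d)  ->  kp + ki/s + kd*s."""
    return kp, kp / tau_i, kp * tau_d          # (kp, ki, kd)

def series_from_parallel(kp, ki, kd):
    """Inverse mapping, back to the time-constant form."""
    return kp, kp / ki, kd / kp                # (kp, tau_i, tau_d)

# Round trip: the two forms describe the same compensator
kp, ki, kd = parallel_from_series(kp=0.05, tau_i=2e-4, tau_d=5e-5)
print(series_from_parallel(kp, ki, kd))  # recovers (0.05, 2e-4, 5e-5) up to rounding
```

Note that in the time-constant form, changing kp alone drags ki and kd along with it, which is precisely why that form is harder to use for Q-tuning.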
The conversion tables to go back and forth between the PID coefficients and the pole-zero
locations are provided in Figure 5.33. Note however that these equations (from Chris Basso's
APEC 2012 seminar, but corrected a bit) emulate a Type 3 compensator. As a result, placing the
two zeros at the same location, as per our usual strategy for canceling the LC pole, leads to a
compensator Q of around 0.5 (critical damping). We have seen that is a major limitation of
standard analog compensators, so in fact this equation set fails to exploit the ability to shape
the peakiness by varying kP, as in digital compensators. To try and correct this, the X-factor
was introduced. It simply multiplies the kP of the limited equation set by a chosen factor.
If the X-factor is set to unity, we get back all the equations in the related literature. However,
the X-factor allows you to change kP to a new value while keeping the other PID coefficients, kI
and kD, unchanged, thereby fixing the location of the coincident zeros. In this manner we can
tweak just the Q of the compensator to match the Q of the plant, thereby ensuring Q-matching,
which is clearly the secret of our excellent bench performance.
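As a sketch of that idea (Python; `x_factor_for_q` is our own hypothetical helper, not part of any published equation set), the X-factor needed to hit a target Q follows directly from Q = √(kI·kD)/kP:

```python
import math

def x_factor_for_q(kp, ki, kd, q_target):
    """X-factor that rescales only kP so the compensator Q becomes q_target.
    New kP = X*kP; kI and kD (and hence the zero location f0) are untouched."""
    q_now = math.sqrt(ki * kd) / kp
    return q_now / q_target   # since Q_new = sqrt(kI*kD)/(X*kP) = Q_now/X

# Example: a Basso-style equation set gives Q = 0.5; the plant needs Q = 5
kp, ki, kd = 0.1, 250.0, 1e-5   # hypothetical coefficients, Q = 0.5
X = x_factor_for_q(kp, ki, kd, q_target=5.0)
print(X, math.sqrt(ki * kd) / (X * kp))  # new Q is the requested 5.0 (up to rounding)
```

An X-factor below unity deepens the compensator's zero pair into a peaky (high-Q) response; above unity, it flattens it.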
[Figure 5.33 (graphic): conversion equations to go back and forth between the time-constant
PID form (kp, τi, τd, with the extra-pole factor N) and the pole-zero locations (fp0, fz1, fz2, fp1),
with the X-factor appearing as a multiplier on kp.]
Figure 5.33: Alternative representation of PID coefficients, with the unique X-factor method
included
5.26 Conclusion
Based on the discussion so far, here is a summary of “Digital versus Analog” in control loop
implementations:
a) Analog compensators are very tricky to adjust. Digital compensators can be tweaked
more easily, and logically.
b) Analog compensators are very prone to component tolerances, temperature, standard-
value availability and so on. Digital compensators can be set precisely.
c) Analog compensation throws up one or two more poles than perhaps necessary. We
struggle to locate them correctly, without affecting the other poles and zeros, since they
are all so intertwined. Digital loops allow us to introduce additional poles or zeros
completely separately, and on demand.
d) Analog compensators cannot properly “kill” the LC double pole, because of the nature of
the zeros they provide (limited Q). Digital compensators, if used properly, can enable the
removal of conditional instability issues, leading to much lower output ringing during
load and line transients.
We have come a long way in understanding the link between analog and digital control loops,
as applied to switchers. At this stage it is perhaps unnecessary to confront the additional
sampling-related issues of digital control, as reflected in the z-plane etc. We do not think that
helps in either the understanding of control loops, or their implementation from a systems-
level design perspective. The Q-mismatch issue seems to have been recognized, and fixed,
leading to best results.
5.27 Appendix 1 – MathCad spreadsheet
COLLECTING TYPOS ON A-Z 2ND
A few years ago, I was emphatically told by a very senior engineer at Cisco that "Sanjaya, you
never make a mistake". I remember I had gone there with dear Fernando, our Sales person, to
Microsemi. Indeed, there have been very few reported typos, let alone errors, in my books
over the years, which is why, unlike every other author out there, I haven't bothered to even
maintain an errata list. However, over the last few years, one of my most talented readers, and
an enthusiastic practitioner of the art, Nicola Rosano from Italy (he works at Thales Alenia
Space Center) has been unearthing quite a few, so I decided to finally start a page. I will add to
this list as we go along. But I thought I really should get started! This initial list is all from Nicola.
You will see that he not only picks out typos in each book, he can point out inconsistencies
between them! Whew! Help! My God, you're good Nicola. Thank you!
1. p. 160: Rac_s formula: the Ns value (factor '2') is missing in the numerator result.
2. p. 169: 'Pac_p' should be 'Rac_p'.
3. p. 184: Idiode of the Buck-Boost is: Id = Iin*(1-D)/D
4. p. 486-487: A 'zero' is missing in the open-loop formula. It should be: 20·log(5000)
5. p. 382: There is a misprint in the right-side picture (the IL2 and IL1 labels).
6. p. 681: There is a misprint in point (d). It should read: "at 150 kHz CISPR… Class B (QP)
limit is 66 dBμV."
7. p. 552 (bottom left): The result in [A/W] is 0.072 A/W. (Check page 138, Chap. 5 of
Switching Power Supply Design & Optimization, 2nd edition.)
8. p. 552 (bottom right): It could be useful to clarify for which conditions [Vac and f] the
result 0.081 [A/W] has been calculated.
9. p. 593 (middle column): The factor D@Vin_pk is missing in the formula, but the
calculation is correct.
10. p. 712: Output voltage ripple (pp) component (capacitance related) for Buck Boost. The
reference interval time is D/fs not (1-D)/fs. It should be: deltaVout = Iout*D/(fs*Cout)
11. p. 716: Output voltage ripple (pp) component (capacitance related) for Flyback. The
reference interval time is D/fs not (1-D)/fs. It should be: deltaVout = Iout*D/(fs*Cout).
12. p. 103: The value of Ipk (10 A) is missing in the equation. However, the result, 0.25 T, is
correct.
Additional typos were found by Nicola Rosano and published in the comments. Unfortunately,
the article was removed by LinkedIn and the comments were lost as well.