
stability_margins doesn't find gain=1 crossover frequency correctly for DT systems #523

Closed
sawyerbfuller opened this issue Jan 28, 2021 · 15 comments · Fixed by #525 or #566

@sawyerbfuller
Contributor

sawyerbfuller commented Jan 28, 2021

stability_margins doesn't find gain=1 crossover frequency correctly for DT systems, but it seems to mostly do so for CT systems. I think the probelm may be in how margins.poly_z_mag1_crossing is finding roots, but there were a few steps in that function I couldn't follow, e.g. why H(z)*H(1/z)=1 implies |H(z)| = 1.

Example:

import numpy as np
import control as ct

omegan = 2*np.pi
zeta = 0.2
plant = 1.1*ct.tf(omegan**2, [1, 2*zeta*omegan, omegan**2])
Ts = .05
plantdisc = ct.c2d(plant, Ts, 'zoh')
#_ = ct.bode_plot(plant, margins=True)  # gives correct gain=1 crossover freq
_ = ct.bode_plot(plantdisc, margins=True)  # no gain=1 crossover or phase margin given

[Bode plot of plantdisc with margins=True: no gain crossover frequency or phase margin is reported]

As an example of what it should look like, here is Matlab:

omegan = 2*pi;
zeta = 0.2;
plant = 1.1*tf(omegan^2, [1, 2*zeta*omegan, omegan^2])
Ts = .05;
plantdisc = c2d(plant,Ts,'zoh')
margin(plantdisc)

[MATLAB margin plot of plantdisc showing the expected gain crossover frequency and phase margin]

bnavigator added a commit to bnavigator/python-control that referenced this issue Jan 30, 2021
@bnavigator
Contributor

why H(z)*H(1/z)=1 implies |H(z)| = 1

If I am not mistaken, it follows from |H(z)|² = H(z) * conj(H(z)) = 1. Octave uses the same formula.

But the algorithm had a bug. Thanks for noticing and reporting.

@sawyerbfuller
Contributor Author

I see. Makes sense if we are restricting the root search to the unit circle, where 1/z = conj(z).

And it has been a little while, but I guess it must also be that H(conj(z)) = conj(H(z)), which holds because the transfer function has real coefficients.
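
For concreteness, here is a quick numerical check of that identity on the unit circle (a sketch using an arbitrary real-coefficient discrete-time system, not one from this issue):

import numpy as np
import control as ct

# Arbitrary stable discrete-time transfer function with real coefficients
# (illustrative only).
H = ct.tf([0.5, 0.2], [1, -0.9, 0.4], 0.1)

# A point on the unit circle, z = exp(j*w*dt), so 1/z == conj(z).
z = np.exp(1j * 0.7)

# For real coefficients H(conj(z)) == conj(H(z)), hence H(z)*H(1/z) == |H(z)|**2.
lhs = H(z) * H(1 / z)
rhs = abs(H(z)) ** 2
print(np.isclose(lhs, rhs))  # expected: True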

bnavigator added a commit that referenced this issue Jan 30, 2021
fix #523: finding z for |H(z)|=1 computed the wrong polynomials
@sawyerbfuller
Contributor Author

Reopening: I found another example that seems to confuse the current version, a 4th-order system consisting of two slightly damped resonances in series:

import control as ct
omegan = 1
zeta = 0.5
resonance = ct.tf(omegan**2, [1, 2*zeta*omegan, omegan**2])
omegan2 = 100
resonance2 = ct.tf(omegan2**2, [1, 2*zeta*omegan2, omegan2**2])
sys = 5 * resonance * resonance2
#100*ct.tf(1,[1, 0]) * ct.tf([1, 1], [1, 100]) * resonance
sysd = ct.c2d(sys, .001)

This seems to give a reasonable result for the continuous-time case:
ct.bode_plot(sys, margins=True);
[Bode plot of sys with margins=True: margins reported correctly]

but not for the discrete-time case:
ct.bode_plot(sysd, margins=True);
[Bode plot of sysd with margins=True: margins reported incorrectly]

If I lower the damping ratio to zeta=0.05, it seems to manage:
[Bode plot for zeta=0.05: margins found]

@sawyerbfuller sawyerbfuller reopened this Mar 4, 2021
@sawyerbfuller
Contributor Author

@bnavigator any ideas? I tried relaxing the tolerance, but that didn't seem to help.

@bnavigator
Contributor

Numerical issues. The coefficients of the numerator are significantly smaller than those of the denominator.

def _poly_z_mag1_crossing(num, den, num_inv_zp, den_inv_zq, p_q, dt, epsw):
    # |H(z)| = 1, H(z)*H(1/z)=1, num(z)*num(1/z) == den(z)*den(1/z)
    p1 = np.polymul(num, num_inv_zp)
    p2 = np.polymul(den, den_inv_zq)
    if p_q < 0:
        # * z**(-p_q)
        x = [1] + [0] * (-p_q)
        p1 = np.polymul(p1, x)
    z = np.roots(np.polysub(p1, p2))
    eps = np.finfo(float).eps**(1 / len(p2))
    z, w = _z_filter(z, dt, eps)
    z = z[w > epsw]
    w = w[w > epsw]
    return z, w

IPdb [1]: num
array([2.04126716e-09, 2.19977183e-08, 2.15576756e-08, 1.92123173e-09])

IPdb [2]: den
array([ 1.        , -3.89432859,  5.69259981, -3.70220425,  0.90393303])

IPdb [3]: p1
array([3.92174723e-18, 8.62676896e-17, 5.60540187e-16, 9.56490893e-16,
       5.60540187e-16, 8.62676896e-17, 3.92174723e-18, 0.00000000e+00])

IPdb [4]: p2
array([  0.90393303,  -7.2224165 ,  25.25592868, -50.48489472,
        63.09489902, -50.48489472,  25.25592868,  -7.2224165 ,
         0.90393303])

IPdb [5]: np.polysub(p1, p2)
array([ -0.90393303,   7.2224165 , -25.25592868,  50.48489472,
       -63.09489902,  50.48489472, -25.25592868,   7.2224165 ,
        -0.90393303])

IPdb [6]: np.roots(np.polysub(p1, p2))
array([1.04733131+0.09092897j, 1.04733131-0.09092897j,
       0.94766452+0.08227594j, 0.94766452-0.08227594j,
       1.00290822+0.j        , 0.99999917+0.00307635j,
       0.99999917-0.00307635j, 0.99709245+0.j        ])

IPdb [7]: z
array([0.99999917+0.00307635j])

IPdb [8]: w
array([3.07634228])

The result of num(z)*num(1/z) - den(z)*den(1/z) is barely affected by the numerator, which gives an inaccurate result for the root. 😒
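
To make that concrete, reconstructing p1 and p2 from the num and den arrays above shows that the subtraction does not change p2 at all in double precision (a sketch, using the rounded values printed by the debugger):

import numpy as np

num = np.array([2.04126716e-09, 2.19977183e-08, 2.15576756e-08, 1.92123173e-09])
den = np.array([1., -3.89432859, 5.69259981, -3.70220425, 0.90393303])

# num(z)*num(1/z) and den(z)*den(1/z), as in _poly_z_mag1_crossing above;
# reversing the coefficients gives the 1/z polynomial up to a power of z.
p1 = np.polymul(num, num[::-1])
p1 = np.polymul(p1, [1, 0])   # the z**(-p_q) shift applied by the function
p2 = np.polymul(den, den[::-1])

# p1 is ~1e-16 while p2 is O(10), so p1 - p2 rounds to exactly -p2 and the
# numerator has no influence on the computed roots.
print(np.array_equal(np.polysub(p1, p2), -p2))  # expected: True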

My machine gives me a crossing frequency of 3.08 rad/s for zeta=0.5 too:

[Bode plot with the computed margins; zoomed in, the reported gain crossover is near 3.08 rad/s]

@sawyerbfuller
Contributor Author

I see. Looks like this may happen any time the Nyquist frequency is 3+ orders of magnitude faster than the slowest pole. What do you think about a stopgap solution that detects this by looking at the relative orders of magnitude of the numerator and denominator polynomials, and either gives a warning or reverts to the interpolating numerical solution via the FrequencyResponseData class?
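
A rough sketch of the kind of check I have in mind (the threshold and the helper name are made up):

import numpy as np

def _probably_ill_conditioned(num, den, threshold=1e6):
    # Heuristic only (not library code): flag cases where the numerator
    # coefficients are many orders of magnitude smaller than the denominator
    # coefficients, so that num(z)*num(1/z) - den(z)*den(1/z) effectively
    # loses the numerator in floating point.
    return np.max(np.abs(den)) / np.max(np.abs(num)) > threshold

# For the example above (num ~ 1e-9, den ~ 1) the ratio is roughly 1e9,
# so the check would trigger and we could warn or fall back to an
# FRD-based calculation.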

@bnavigator
Contributor

I was thinking of using the inaccurate, but not completely wrong, crossing frequency that was found as a starting point for a local numerical solve. That way we don't have to evaluate the whole frequency range again.
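
Roughly like this, as a sketch (the bracketing factor is arbitrary and the function name is made up):

import numpy as np
from scipy.optimize import brentq

def refine_gain_crossover(sys, w0, factor=2.0):
    # Refine a candidate gain-crossover frequency w0 by solving
    # |H(exp(j*w*dt))| - 1 = 0 locally, instead of re-evaluating the whole
    # frequency range.
    def gain_minus_one(w):
        return abs(sys(np.exp(1j * w * sys.dt))) - 1.0
    a, b = w0 / factor, w0 * factor
    if gain_minus_one(a) * gain_minus_one(b) > 0:
        return w0  # no sign change in the bracket, keep the candidate
    return brentq(gain_minus_one, a, b)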

@sawyerbfuller
Contributor Author

Unfortunately I have another example where it is nowhere close (the examples involve solutions for a class I am teaching, so I will have to do some obfuscation before I can post them : )

@sawyerbfuller
Contributor Author

Using an interpolating FrequencyResponseData system as an intermediary produces pretty good results (see below), which we could then use as a starting point for a better local numerical result with an optimizer, as you suggested. What if I put in the FRD version as a stopgap (including some sort of test for a small numerator)? Yes, it's slow, but it might serve as a starting point for a better numerical solve.

omega = ct.freqplot._default_frequency_range(sysd)
sysdfrd = ct.FRD(sysd(np.exp(1j*sysd.dt*omega)), omega, smooth=True)
ct.bode_plot(sysdfrd, omega=sysdfrd.omega, margins=True);

[Bode plot of the FRD approximation with margins=True: reasonable gain crossover and phase margin reported]

@murrayrm
Member

One option might be to include a method keyword someplace that allows you to tune the behavior of the calculation. So if you specify method='FRD', then you get the behavior above. This would be a way to keep every usage from being forced onto a slow method.
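
For example (the keyword name and accepted values here are hypothetical, just to illustrate the idea):

# Hypothetical usage; sysd is the discrete-time system from the examples above.
import control as ct
margins_default = ct.stability_margins(sysd)             # pick a good default automatically
margins_frd = ct.stability_margins(sysd, method='FRD')   # force the slower FRD-based path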

@sawyerbfuller
Contributor Author

(I am hoping to have something working in place for 0.9 for the discrete-time controls class I'm teaching, starting March 29. The students will appreciate a working DT margin calculation! : )

@murrayrm
Member

If we can get this sorted out before ~22 March, we can do a 0.9.0 release that week to make it easy for students to install.

@sawyerbfuller
Contributor Author

@murrayrm sure, a method keyword would make sense. What if I let that override things, but by default revert to the FRD method if it looks like the "old" method isn't going to give good numerical results?

@murrayrm
Member

Seems fine by me to have the default be something that works well.

You might consider setting up a benchmark file (in benchmarks/) that lets you see how much of a performance hit there is on some different examples.
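
A minimal sketch of what such a benchmark might look like, assuming the benchmarks/ directory follows airspeed velocity (asv) conventions (file and class names are illustrative):

# benchmarks/stability_margins_bench.py (illustrative name)
import numpy as np
import control as ct

class StabilityMarginsBench:
    def setup(self):
        omegan = 2 * np.pi
        zeta = 0.2
        plant = 1.1 * ct.tf(omegan**2, [1, 2 * zeta * omegan, omegan**2])
        self.sysd = ct.c2d(plant, 0.05, 'zoh')

    def time_stability_margins_discrete(self):
        ct.stability_margins(self.sysd)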

@bnavigator
Contributor

In any case, there should be a check of whether the current results are accurate enough, with a warning if not.
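
For instance, something along these lines (a sketch; the tolerance and function name are arbitrary):

import warnings
import numpy as np

def check_gain_crossover(sys, wc, tol=1e-3):
    # Verify that the reported gain-crossover frequency really has |H| close
    # to 1 and warn if it does not.
    mag = abs(sys(np.exp(1j * wc * sys.dt)))
    if abs(mag - 1.0) > tol:
        warnings.warn("stability_margins: |H| = %.4g at the reported gain "
                      "crossover %.4g rad/s; result may be inaccurate"
                      % (mag, wc))
    return mag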
