control.pade has possible numerical issues #74

Closed

jason-s opened this issue Dec 16, 2015 · 5 comments

jason-s commented Dec 16, 2015

There is an incorrect "optimization" in control.pade (see delay.py, lines 80-84):

    c = 1.
    for k in range(1, n+1):
        c = T * c * (n - k + 1)/(2 * n - k + 1)/k
        num[n - k] = c * (-1)**k
        den[n - k] = c

The problem is the inclusion of the factor T in the recursion. I pulled out my copy of Golub + Van Loan (p. 574), and they keep the coefficient c separate from the powers of T until the point of use (their version is in terms of matrices, but it's the same math). I would change it to:

    c = 1.
    Tpow = 1.
    sgn = 1
    for k in range(1, n+1):
        c *= (n - k + 1)/(2 * n - k + 1)/k
        Tpow *= T
        sgn = -sgn
        num[n - k] = sgn * c * Tpow
        den[n - k] = c * Tpow
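
For example, with n = 1 the loop runs once with c = 1/2, Tpow = T, and sgn = -1, giving num = [-T/2, 1] and den = [T/2, 1] (coefficients in descending powers of s), i.e. the familiar first-order approximant (1 - sT/2)/(1 + sT/2).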

Also, normalizing at the end (rather than somehow building it in from the beginning) seems a bit sketchy, especially if den[0] is very large or very small:

    num = [coeff/den[0] for coeff in num]
    den = [coeff/den[0] for coeff in den]
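
For n = 1, for instance, den[0] = T/2, so after the division num = [-1, 2/T] and den = [1, 2/T]: the leading coefficient becomes 1, and the constant terms scale like 1/T.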
roryyorke (Contributor) commented

Do you have values of T and n demonstrating a problem?

The code at the end of this comment runs without raising an assertion;
that suggests that keeping c and T separate makes at most 1 ulp of
difference, i.e., not much.

I've tried to test whether scaling (normalizing?) at the end by den[0]
makes a difference; I did this by finding the exact Padé approximant
coefficients with fractions.Fraction and comparing them against the
floating-point results. I didn't do this terribly scientifically, but
for a variety of T and n the ulp difference is 1 up to order 11.

The absolute error is not terribly good; for instance, for T=1e-3
and n=5, the maximum absolute error over all coefficients is 2.7;
increasing n to 6 results in an error of 31e3; and for n=7 it's 2.3e9
(!). In the latter case the constant coefficient is around 2e28, so
the large absolute error is reasonable.

(The code for all these claims is not terribly polished, but if anyone
wants it let me know.)
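
The exact-coefficient part is essentially this (a minimal sketch, not the polished script; it reruns the delay.py recursion in exact rational arithmetic):

from fractions import Fraction

def pade_exact(T, n):
    # delay.py recursion in exact rational arithmetic; Fraction(T)
    # is the exact value of the float T, so any difference from the
    # floating-point result is pure roundoff in pade() itself
    num = [Fraction(0)]*(n+1)
    den = [Fraction(0)]*(n+1)
    num[-1] = Fraction(1)
    den[-1] = Fraction(1)
    c = Fraction(1)
    Tf = Fraction(T)
    for k in range(1, n+1):
        c = Tf * c * (n - k + 1)/(2 * n - k + 1)/k
        num[n - k] = c * (-1)**k
        den[n - k] = c
    # same normalization as delay.py, done exactly
    d0 = den[0]
    num = [x/d0 for x in num]
    den = [x/d0 for x in den]
    return num, den

Comparing these against the floating-point coefficients, ulp by ulp, is what gives the numbers above.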

In short, I don't think keeping c and T separate will change much;
and, assuming one requires that the coefficient of the highest power
of s is 1, I don't think changing the normalization will change much
either. We seem to be getting pretty close to the best floating-point
answer we can get.

I'm not even close to being a numerical analyst, but my perception is
that addition is where error particularly accumulates; the operations
involving c and T are all multiplications. Golub and Van Loan are
dealing with matrix multiplication, which does involve additions;
perhaps that's why they kept c and T separate?

import numpy as np

def orig(T, n):
    # current delay.py recursion: the factor T is folded into c at each step
    num = [0.]*(n+1)
    den = [0.]*(n+1)
    num[-1] = 1.
    den[-1] = 1.
    c = 1.
    for k in range(1, n+1):
        c = T * c * (n - k + 1)/(2 * n - k + 1)/k
        num[n - k] = c * (-1)**k
        den[n - k] = c
    return num, den

def changed(T, n):
    # proposed version: keep c and the powers of T separate
    num = [0.]*(n+1)
    den = [0.]*(n+1)
    num[-1] = 1.
    den[-1] = 1.
    c = 1.
    Tpow = 1.
    for k in range(1, n+1):
        c = c * (n - k + 1)/(2 * n - k + 1)/k
        Tpow *= T
        num[n - k] = c * Tpow * (-1)**k
        den[n - k] = c * Tpow
    return num, den

npa = np.array

# a spread of T values: moderate, irrational, small, and large
Ts = [1, np.sqrt(2), np.sqrt(0.5), 1e-2, 1e-6, 1e6]

for T in Ts:
    # recompute both versions for each T before comparing
    no, do = orig(T, 5)
    nc, dc = changed(T, 5)
    np.testing.assert_array_almost_equal_nulp(npa(no), npa(nc), nulp=1)
    np.testing.assert_array_almost_equal_nulp(npa(do), npa(dc), nulp=1)
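
To make the addition-vs-multiplication point concrete, a toy illustration (separate from the test above):

import math
from fractions import Fraction

# a long chain of multiplications keeps the relative error tiny:
# each float multiply is exact to within half an ulp
x = 1.0
for _ in range(100):
    x *= math.pi
exact = Fraction(math.pi)**100    # exact product of the float value of pi
rel_err = abs(Fraction(x) - exact)/exact
print(float(rel_err))             # on the order of 1e-15, i.e. a few ulps

# addition is where digits really vanish, via cancellation
big = 1e16
print((big + 1.0) - big)          # 0.0, not 1.0: the 1.0 was rounded away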

jason-s commented Jan 4, 2016

OK, I think I understand your point (as long as there are only multiplications and no additions, very large or very small floating-point exponents don't cause roundoff problems), and it sounds reasonable to me.

roryyorke (Contributor) commented

I think we can probably close this?

jason-s commented Mar 17, 2017

I guess so. I have no time to investigate this again in the near future (next 12 months) unless, of course, it becomes an urgent requirement for my day-to-day work.

roryyorke (Contributor) commented

Oh, I thought you agreed (see the comment above) that there probably wasn't a numerical issue, beyond the limits inherent in using a TF representation.

I think the approach you proposed in #75 for the state-space representation would be more accurate, perhaps as accurate as possible, but it isn't applicable to the TF representation.
