Removed epsilon perturbation value in solve_passivity_LMI. Fix associated unit test. #814
Conversation
…t test to reflect scaling values from empirical testing.
This is dangerous. There is a zoo of lower-precision hardware systems out there where python-control might be used and packaged. For example, I frequently encounter errors for such tests on the openSUSE build system on i586, aarch64, 32-bit ARM, s390x, PowerPC, and similar platforms.
I think we'd either have to delete this test, or do something like detect the precision of the architecture, then back out the values of the matrices such that the condition number is bounded by roughly 10^(n/2), where n is the number of bits used to represent floating-point numbers. (Just spitballing.)
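A minimal sketch of the idea above, assuming the bound 10^(n/2) with n taken as the significand width of the float type; the function names are illustrative, not part of python-control:

```python
import numpy as np

def condition_number_bound(dtype=np.float64):
    # Number of bits in the significand (mantissa) of the float type.
    n_bits = np.finfo(dtype).nmant
    # Available decimal digits of precision, then take half as a power of 10.
    n_digits = n_bits * np.log10(2)
    return 10.0 ** (n_digits / 2)

def is_well_conditioned(A, dtype=np.float64):
    # Accept the matrix only if its condition number is below the bound.
    return np.linalg.cond(A) < condition_number_bound(dtype)
```

For float64 this gives a bound of roughly 10^7.8, so a nearly singular matrix (condition number around 10^15) would be rejected.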
I will delete the test.
@@ -14,7 +14,6 @@
 except ImportError:
     cvx = None

-eps = np.nextafter(0, 1)
(This is a by-the-way comment, feel free to ignore.)
This eps was very small; I think a more typical "eps" would be the smallest number eps such that 1 + eps > 1, or a multiple or perhaps the square root of that. np.finfo is useful for finding these sorts of values.
In [402]: np.nextafter(0, 1)
Out[402]: 5e-324
In [403]: np.finfo(float).eps
Out[403]: 2.220446049250313e-16
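To illustrate the distinction: np.nextafter(0, 1) is the smallest positive subnormal double, while np.finfo(float).eps is the machine epsilon, the smallest eps with 1 + eps > 1. A short check:

```python
import numpy as np

eps = np.finfo(float).eps       # machine epsilon for float64, 2**-52
assert 1.0 + eps > 1.0          # the smallest eps with this property
assert 1.0 + eps / 2 == 1.0     # anything smaller is lost to rounding
denorm = np.nextafter(0, 1)     # smallest positive subnormal, ~5e-324
assert denorm < eps             # vastly smaller than machine epsilon
```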
That is a pro tip, and what I was aiming for.
Co-authored-by: Ben Greiner <code@bnavigator.de>
This should hopefully address #761
There was a unit test that was supposed to "hint at" how much you could scale the matrices and still expect correct results. It was marked as xfail after it proved flaky on GitHub's integration pipeline. Hopefully this change removes the flakiness.
This change also pushes more responsibility onto the user for knowing whether their system is "well conditioned". I suspect the root cause of the flakiness is that CVXOPT performs matrix inversions while solving the passivity LMI, and the algorithm divides by zero if the condition numbers of the system matrices are too high. So I added an exception handler to catch the zero division during the CVXOPT solve, note that we think it's because the matrices are ill conditioned, and re-raise the exception.
I then empirically reduced the relative scaling in the unit test until it started passing on my local system.
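The catch-and-re-raise pattern described above can be sketched as follows; `lmi_solve` is a stand-in for the actual CVXOPT call, and the wrapper name is hypothetical, not the real python-control function:

```python
def solve_with_conditioning_hint(lmi_solve, *args):
    """Run the LMI solve, annotating a ZeroDivisionError with the likely cause."""
    try:
        return lmi_solve(*args)
    except ZeroDivisionError as e:
        # Re-raise with context so the user knows the suspected root cause.
        raise ZeroDivisionError(
            "The LMI solver divided by zero; the system matrices are "
            "probably too ill conditioned."
        ) from e
```

Chaining with `raise ... from e` preserves the original traceback so the underlying solver error is still visible.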