ticker.py: always use np.round? #27609
Conversation
Thank you for opening your first PR into Matplotlib!
If you have not heard from us in a week or so, please leave a new comment below and that should bring it to our attention. Most of our reviewers are volunteers and sometimes things fall through the cracks.
You can also join us on gitter for real-time discussion.
For details on testing, writing docs, and our review process, please see the developer guide
We strive to be a welcoming and open project. Please follow our Code of Conduct.
Thanks, I think that this makes sense! As seen, we already use `np.round` in places. Can you please add a test for this? I think a smoke test, where you basically plot something (simple) that currently triggers the error, should be enough, with a comment that it is a smoke test and a pointer to this PR for motivation. That way we can make sure the error doesn't return later if someone thinks "well, I can replace that np.round with round"...

One can of course discuss whether all `round` calls should become `np.round`, as now, or whether one should read the code a bit more carefully and see if any can stay. On the other hand, one can also question whether `round` is actually that much better for any reason? It is not obvious from the code why sometimes `round` is chosen and sometimes `np.round` (except when we know that there is a vector involved).
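For reference, the behavioral difference being discussed is roughly the following (illustrative snippet, not from the thread): the builtin `round` raises on non-finite floats, while `np.round` propagates them.

```python
import numpy as np

# Builtin round() cannot convert non-finite floats to an integer:
try:
    round(float("nan"))
except ValueError as err:
    print("round(nan) ->", err)
try:
    round(float("inf"))
except OverflowError as err:
    print("round(inf) ->", err)

# np.round propagates non-finite values instead of raising:
print(np.round(np.nan), np.round(np.inf))  # nan inf
```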
Hey, so I tried to reproduce the issue I experienced, with some ...
In principle I agree this makes sense, but there might be some other changes needed (like ...). You can reproduce this with

```python
import matplotlib.pyplot as plt
plt.ion()
fig, ax = plt.subplots()
import numpy as np
ax.loglog(np.logspace(0, 10, 10), np.logspace(0, 10, 10)**2)
ax.set_yticks([1, 10, np.inf])
```

which suggests that we should be doing better validation on the way into ...
Wow, thanks for creating this code example! I will put something similar in the tests I will hopefully write after we discuss a solution.
Definitely - having ...

Having ...
OK, I have an issue with running the tests locally, so I hope it will be OK with you that I use the PR's CI for that. I have now pushed a new test, without the fix yet, expecting to see only the test I added failing, and I will push a fix afterwards, expecting the issue to be fixed by it.
OK, test failures from CI:
Now pushed the suggested fix. Hopefully CI will be green 🤞.
Force-pushed from 8188964 to 7f6af99 (compare)
Aish, there are more errors than I anticipated... What if I replace all ...
🎉 I finally managed to fix the new test I added (it took me a while because of this). CI is green now. Your review is welcome.
BTW @tacaswell, CI has been green here as well for a while now :).
Can you try rebasing to see if we have a fix for the AppVeyor failure on main?
Does this only let ...
I'm not sure I understood the 2nd part - what do you mean by "tick at inf"? If you are asking in general about the motivation for the PR, it is a bit hard for me to find the most accurate words to explain it, but it certainly fixes the test added in the 1st commit, and I believe it fixes many other potential issues with NaN and infinite values.
I'm a bit hesitant to change all the math <-> np calls. This may have surprising side effects. How much of this is to handle inf in `set_yticks`? I'd rather error there and keep other changes minimal.
lib/matplotlib/tests/test_ticker.py (Outdated)

```python
def test_yticks_with_inf():
    fig, ax = plt.subplots()
    ax.loglog(np.logspace(0, 10, 10), np.logspace(0, 10, 10)**2)
    ax.set_yticks([1, 10, np.inf])
```
IMHO we should raise if asked to put a tick at infinity. That check should be in `FixedLocator.__init__`.
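A rough sketch of what such a check could look like (my illustration, not the actual patch; the exact error wording is an assumption):

```python
import numpy as np

# Hypothetical validation that could live in FixedLocator.__init__ (sketch only):
def validate_tick_locations(locs):
    locs = np.asarray(locs, dtype=float)
    if not np.all(np.isfinite(locs)):
        raise ValueError(f"tick locations must be finite, got {locs!r}")
    return locs

print(validate_tick_locations([1, 10, 100]))   # array([  1.,  10., 100.])
# validate_tick_locations([1, 10, np.inf])     # would raise ValueError
```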
In the original code that produced the issue I encountered and tried to fix here, I didn't explicitly use ...
Are you able to produce a minimal example of the issue we are trying to fix here? Without that, we're basically blind:

The proposed math <-> np change is substantial (in particular because the changes are subtle). We'd need a very informed decision to go in that direction. One prerequisite is that we know exactly what problem we are fixing with it.
I agree that the test I added here (suggested by @tacaswell in this comment) might not be very satisfying because it is a bit superficial, so I'm happy to report that I did now manage to create a less superficial MWE :):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2], [1, 1e261])
ax.set_yscale("log")
plt.show()
```

Gives ...

```python
#!/usr/bin/env python
import numpy as np
import math

for oom in range(300, 4400):
    x = np.float128(10**oom)
    print("oom(x) = ", oom, np.log(x), math.log(x))
```

The above prints inf in the last word after ... If I use ...
I agree overall with the development workflow you are trying to impose, but I find it a bit peculiar to justify the current state in which ... I can replace the test I added with the above MWE and verify that the PR fixes the test, but I first want to know what direction you'd like to take this, now that I have obtained further clues about the bug I experienced.
Unfortunately, it's not that simple:

The next step is to follow the MWE code through a debugger and find out where the infinity comes in. There must be a place where 1e261 is turned into infinity, and most likely that's the code we have to fix.

Edit: Part of the issue is that we add whitespace around the data (a.k.a. margins), ... which still works, but will fail for 1e270. Or even better, don't bother with data but isolate the topic to the axis limits ...
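If I read that suggestion correctly, a data-free reproduction might look roughly like this (my sketch, assuming that drawing the figure is enough to exercise the locator through the limits alone):

```python
import matplotlib
matplotlib.use("Agg")   # headless backend, only needed to force a draw here
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_yscale("log")
ax.set_ylim(1, 1e261)   # no data at all; the limits alone drive the tick locator
fig.canvas.draw()       # tick computation happens here
print(ax.get_yticks()[:5])
```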
Force-pushed from 1c60bf8 to 4f8dc92 (compare)
Reported at matplotlib#27609. Values < 1e300 that trigger this test failure might be found, but it seems a bit better not to be on the edge of this issue.
OK, that's sort of what I did in the beginning, although I didn't have such a good MWE. I also described that process in the very first comment of this PR. In the meantime I have reordered the commits a bit and added a test that currently fails, as I have not yet found an optimal solution for it. Your view on this is most welcome.
Force-pushed from 4f8dc92 to 2357bef (compare)
Given the recent discussion, I'm going to move this to the 3.11 milestone.
The actual MWE is ...

and the relevant code: matplotlib/lib/matplotlib/ticker.py, lines 2463-2464 at 4468488, and matplotlib/lib/matplotlib/ticker.py, line 2473 at 4468488.

With numpy, 10.0 ** 310 == inf (pure Python throws an OverflowError instead). Note that the largest number you can represent with 64-bit floats is ~1.8e308. A tick locator should never place ticks at inf. The proper solution to this problem is to tune the tick location algorithm in ...
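To spell the overflow difference out (my snippet, not from the comment):

```python
import numpy as np

try:
    print(10.0 ** 310)              # pure Python float: raises OverflowError
except OverflowError as err:
    print("python float:", err)

print(np.float64(10.0) ** 310)      # numpy float64: inf (with a RuntimeWarning)
print(np.finfo(np.float64).max)     # ~1.7976931348623157e+308
```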
That's true, and it'd indeed be nice to start by writing such a test specifically for ...

```diff
diff --git i/lib/matplotlib/ticker.py w/lib/matplotlib/ticker.py
index 63a519f793..74cfac52e3 100644
--- i/lib/matplotlib/ticker.py
+++ w/lib/matplotlib/ticker.py
@@ -2401,7 +2401,7 @@ class LogLocator(Locator):
         have_subs = len(subs) > 1 or (len(subs) == 1 and subs[0] != 1.0)

         decades = np.arange(math.floor(log_vmin) - stride,
-                            math.ceil(log_vmax) + 2 * stride, stride)
+                            math.ceil(log_vmax) + 2 * stride, stride, dtype=np.float128)

         if have_subs:
             if stride == 1:
```

But then I encountered an issue in matplotlib/lib/matplotlib/ticker.py, line 1101 at 51e7230, which makes ...

```diff
diff --git i/lib/matplotlib/ticker.py w/lib/matplotlib/ticker.py
index 74cfac52e3..5f3222fc1f 100644
--- i/lib/matplotlib/ticker.py
+++ w/lib/matplotlib/ticker.py
@@ -1088,7 +1088,7 @@ class LogFormatterMathtext(LogFormatter):
         b = self._base

         # only label the decades
-        fx = math.log(x) / math.log(b)
+        fx = np.log(x) / np.log(b)
         is_x_decade = _is_close_to_int(fx)
         exponent = round(fx) if is_x_decade else np.floor(fx)
         coeff = round(b ** (fx - exponent))
```

And then the original ... This rabbit hole is what made me use numpy's functions everywhere, as they seem a bit more accurate almost all of the time. If we do after all go with such an approach, it would be a good idea to use float128 all the time.
Rewriting for float128 would be a non-trivial, medium-sized task. Most users don't use float128 anyway, and introducing it only to have larger ticklabels feels like overkill - or at least a completely separate discussion with much larger impact. I'd rather try to manipulate the ticker so that it does not produce tick label positions beyond 1e308. The very naive approach would be to clip the nan position; one has to try whether that looks OK. Otherwise, you'd need to come up with a better decades algorithm for the edge case, e.g. choose something between vmax and 3e8 as the top tick position and a suitable spacing going down in regular decades.
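As a very rough illustration of the naive clipping idea (my sketch; the `decades` array of log10 positions is an assumption based on the `LogLocator` diff above):

```python
import numpy as np

# Suppose the locator invented these decade exponents (log10 of tick positions):
decades = np.arange(255.0, 325.0, 10.0)           # 255, 265, ..., 315

# Exponents above ~308.25 would overflow float64 when turned into tick values,
# so drop them before computing the actual positions:
max_decade = np.log10(np.finfo(np.float64).max)   # ~308.25
decades = decades[decades <= max_decade]

ticks = 10.0 ** decades                           # all finite now
print(ticks)
```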
Apparently the maximal float size is actually:

```python
import sys
import numpy as np

print(np.finfo(np.float64).max)
print(sys.float_info.max)
```

I think that the natural thing to do would be to inherit the ... Indeed, that's a non-trivial task though, so I'm not sure I'd be able to invest time in it. TBH, the issue I encountered here is not an active issue for me anymore; I just kept giving this attention because I want to contribute back to Matplotlib. I think that the 3 commits prior to the last one are worth merging either way, and we can create a separate issue with all the details we have aggregated in the last few days.
I think that silently ignoring such inputs can be confusing for users, so I'd raise an error much earlier.
AFAIK, we don't have any guarantees on that, and it would be a massive undertaking. Basically, you'd have to make sure that all internal calculations respect this. I don't think we can or should go there in the foreseeable future.

I'm not ignoring inputs. The inputs are the data range. The TickLocator tries to invent ticks at nice positions, and that results in the current ticks shown in the image below. The naive solution would be to clip the invented right-most tick; the axis limits would still cover the whole data. The proposed alternative decades algorithm could try to find a solution with alternative ticks.
PR summary
The error I experienced initially, as described below, can be reproduced with:
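Presumably something like the MWE from the discussion above:

```python
# Reconstruction based on the MWE discussed in this thread; the exact snippet
# from the PR description may have differed.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2], [1, 1e261])
ax.set_yscale("log")
plt.show()
```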
The PR is currently a draft because I don't know how to best solve the issue. The last commit adds a test with some notes on the peculiarities of it. Ideas are welcome.
I still think that switching from `math` to `numpy` implementations of all `round`, `isclose` and mathematical functions would be a good idea, but I also understand the consideration of "if it ain't broken don't fix it" that recommends keeping the changes to a minimum.

Original observation of the error:
I encountered the following error when using a logarithmic `yscale` in a graph:

...

The above is printed repeatedly, forever, and the GUI is stuck. With this patch applied, I get this graph in a logarithmic scale:

The orange line goes upwards to huge values - which proves that the fit I tried to perform was wrong, but matplotlib still shouldn't almost crash due to such a scientific error. Without this patch, I managed to fix one instance of the above error using the following code:

...

But then I encountered this error:

...
So I decided to discuss with you the simple suggestion of replacing all usages of Python's builtin `round` with `numpy`'s implementation, which can handle `NaN` or `Inf` easily.

It was very hard to debug, as I couldn't even use a `try - except` clause to catch the error. I'm sorry, but I also haven't yet generated a reproducible example, due to the structure that my data is currently held in. I'd like to hear what you think first, before I write tests for this PR.

PR checklist