Improve rendering when using multiple / large labels #21958
base: main
Conversation
This seems reasonable. Unfortunately, the figure doesn't survive a pickle - i.e. with the new cache, the objects are not serializable. I expect the figure used to have something that detached the cache, and you could likely do the same thing here? But I'm not an expert on (nor a fan of) pickling figures ;-)
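The "detach the cache on pickle" idea mentioned above could be sketched roughly as follows. This is illustrative only, assuming a hypothetical per-instance _cached attribute; it is not matplotlib's actual Text implementation:

```python
import pickle


class Text:
    """Toy stand-in for a text artist with a per-instance layout cache."""

    def __init__(self, s=""):
        self._text = s
        self._cached = {}  # per-instance layout cache (assumed name)

    def __getstate__(self):
        # Detach the cache before pickling; entries may hold
        # unpicklable objects and are cheap to rebuild lazily.
        state = self.__dict__.copy()
        state["_cached"] = {}
        return state


t = Text("label")
t._cached[("label", 12.0)] = lambda: None  # lambdas are not picklable
t2 = pickle.loads(pickle.dumps(t))         # succeeds: cache was dropped
```

Without `__getstate__`, `pickle.dumps(t)` would raise on the lambda stored in the cache.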
Thanks for the brief explanation @jklymak.
Many tests fail locally for me as well. If you are pretty sure it's not due to your PR, it's perfectly fine to push to the more controlled test environment.
Thanks for the comments @tacaswell, I hope I addressed them with the recent changes.
I wonder(?) if it may make sense to keep the old global cache layer as well, so that if you're e.g. drawing 10 axes each with the same tick labels, the labels on the latter axes still benefit from the work done on the first axes. Of course, that would require a bit more rearchitecting of the caching...
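A two-layer lookup along these lines might look roughly like this. All names (GLOBAL_CACHE, _get_layout, the crude eviction) are assumptions for illustration, not matplotlib code:

```python
# Shared bounded cache so identical tick labels across axes reuse work.
GLOBAL_CACHE = {}
GLOBAL_CACHE_MAXSIZE = 50


class Text:
    def __init__(self):
        self._cached = {}  # fast per-instance layer

    def _get_layout(self, key):
        # Layer 1: per-instance cache.
        if key in self._cached:
            return self._cached[key]
        # Layer 2: global cache shared by all Text objects.
        try:
            result = GLOBAL_CACHE[key]
        except KeyError:
            result = self._compute_layout(key)
            if len(GLOBAL_CACHE) >= GLOBAL_CACHE_MAXSIZE:
                GLOBAL_CACHE.clear()  # crude eviction, like maxdict
            GLOBAL_CACHE[key] = result
        self._cached[key] = result
        return result

    def _compute_layout(self, key):
        # Placeholder for the expensive text-layout computation.
        return ("layout-for", key)
```

With this, a second axes asking for the same label string hits the global layer even though its own Text instances start with empty caches.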
@@ -107,7 +107,6 @@ class Text(Artist):
     """Handle storing and drawing of text in window or data coordinates."""

     zorder = 3
-    _cached = cbook.maxdict(50)
Is this dict really a memory hog if it gets larger? Why not just set to a large number and move on? Just because we can make a double caching structure doesn't mean we should. Given that we have had very few complaints of this nature, my assumption is that most people are not hitting the 50-element limit, so bumping it to 5000 for the few people who need it doesn't seem like a terrible idea.
Note that this largely catches users who have accidentally used categoricals and created thousands of ticks. I think, if anything, we should add logic in the categorical ticker that raises a warning if more than 100 ticks are being asked for.
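The safeguard suggested here might look something like the sketch below. The function name and threshold are hypothetical; this is not matplotlib's actual categorical ticker:

```python
import warnings

_MAX_CATEGORICAL_TICKS = 100  # threshold suggested in the review


def categorical_tick_values(categories):
    """Return integer tick positions for a list of category labels,
    warning when an unusually large number of ticks is requested."""
    if len(categories) > _MAX_CATEGORICAL_TICKS:
        warnings.warn(
            f"Requested {len(categories)} categorical ticks; did you "
            "accidentally plot numeric data as strings?",
            UserWarning,
        )
    return list(range(len(categories)))
```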
No, because then there was #22323 which further got rid of the explicit
Moved to 3.8 as the rebase is non-trivial and there have been other changes to this caching layer, so we need to evaluate whether we still need this.
PR Summary
Fixes #21895
The example reported in #21895 initially runs in 10.079 s (locally, on my old laptop). By moving the cache in lib/matplotlib/text.py to an attribute of Text, the example runs in 4.426 s, which translates to an improvement of ~78%. Added a unit test that uses labels of different lengths, increasing as 10**i, and asserts that times are shorter. This is not quite the suggestion by @tacaswell, because I am not comparing text vs. no text. I'm not fully convinced of this approach either, but can't think of anything better; I appreciate suggestions to make the test more solid.

No changes in matplotlib/lib/matplotlib/mathtext.py.

As a separate note, this also probably needs a benchmark in mpl_benchmarks, as suggested by @jklymak on Gitter (thanks Jody!).
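In essence, the change moves the cache from a shared class attribute to a per-instance attribute. A simplified sketch under assumed names (not the real Text implementation, which also involves cbook.maxdict and DPI/renderer state):

```python
class Text:
    # Before: a single class-level cache shared by every Text instance,
    # capped at 50 entries, e.g.:
    #     _cached = cbook.maxdict(50)
    # One artist with many distinct strings evicts entries that every
    # other artist still needs, causing repeated expensive re-layout.

    def __init__(self, s=""):
        self._text = s
        self._cached = {}  # after: per-instance, no cross-artist eviction

    def get_layout(self, key):
        # Memoize the expensive layout computation per (text, size, ...) key.
        if key not in self._cached:
            self._cached[key] = self._layout(key)
        return self._cached[key]

    def _layout(self, key):
        # Placeholder for the real, expensive extent computation.
        return ("extents-for", key)
```

The trade-off discussed in the thread: per-instance caches avoid eviction pressure between artists, but lose the sharing that a global cache gives when many artists draw identical labels.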
PR Checklist

Tests and Styling
- Has pytest style unit tests (and pytest passes).
- Is Flake 8 compliant (install flake8-docstrings and run flake8 --docstring-convention=all).

Documentation
- New features have an entry in doc/users/next_whats_new/ (follow instructions in README.rst there).
- API changes documented in doc/api/next_api_changes/ (follow instructions in README.rst there).