Improve rendering when using multiple / large labels #21958

Draft · wants to merge 5 commits into main
17 changes: 16 additions & 1 deletion lib/matplotlib/tests/test_text.py
@@ -5,7 +5,7 @@
import numpy as np
from numpy.testing import assert_almost_equal
import pytest

import time
import matplotlib as mpl
from matplotlib.backend_bases import MouseEvent
from matplotlib.font_manager import FontProperties
@@ -756,3 +756,18 @@ def test_pdf_chars_beyond_bmp():
    plt.rcParams['mathtext.fontset'] = 'stixsans'
    plt.figure()
    plt.figtext(0.1, 0.5, "Mass $m$ \U00010308", size=30)


def test_cache_large_labels():
    """Test to verify the cache helps when ticks are too large."""
    times = []
    fig, _ = plt.subplots()
    for p in range(1, 5):
        labels = list(range(10**p))
        t0 = time.perf_counter()
        plt.xticks(labels)
        plt.yticks(labels)
        fig.draw_without_rendering()
        times.append(time.perf_counter() - t0)
    assert times[-1] > times[0]
    assert times[-1] > times[-2]
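The test above times each draw with `time.perf_counter` sampled before and after the call. That pattern can be factored into a small helper; a minimal sketch, where the `timed` function is an illustrative name, not something used in this PR:

```python
import time


def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed seconds),
    using the same perf_counter pattern as the test above."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0
```

Calling `timed(fig.draw_without_rendering)` in a loop would reproduce the measurements the test collects into `times`.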
3 changes: 2 additions & 1 deletion lib/matplotlib/text.py
@@ -107,7 +107,6 @@ class Text(Artist):
    """Handle storing and drawing of text in window or data coordinates."""

    zorder = 3
    _cached = cbook.maxdict(50)
Review comment from a Member:
Is this dict really a memory hog if it gets larger? Why not just set it to a large number and move on? Just because we can build a double caching structure doesn't mean we should. Given that we have had very few complaints of this nature, my assumption is that most people are not hitting the 50-element limit, so bumping it to 5000 for the few people who need it doesn't seem like a terrible idea.

Note that this largely catches users who have accidentally used categoricals and created thousands of ticks. If anything, I think we should add logic in the categorical ticker that raises a warning if more than 100 ticks are being asked for.
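The `cbook.maxdict(50)` being deleted here is a size-bounded dict that evicts its oldest entry once full, which is the behaviour the reviewer suggests simply enlarging. A minimal sketch of that behaviour, assuming an `OrderedDict`-based implementation (the class name `BoundedCache` is illustrative, not matplotlib's actual code):

```python
from collections import OrderedDict


class BoundedCache(OrderedDict):
    """Dict that evicts its oldest entry once maxsize is exceeded,
    mimicking the behaviour of a size-bounded cache like cbook.maxdict."""

    def __init__(self, maxsize):
        super().__init__()
        self.maxsize = maxsize

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.maxsize:
            self.popitem(last=False)  # drop the oldest insertion
```

With such a structure, raising the limit from 50 to 5000 is a one-line change, whereas this PR instead moves the cache onto each instance.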


    def __repr__(self):
        return "Text(%s, %s, %s)" % (self._x, self._y, repr(self._text))
@@ -158,6 +157,7 @@ def __init__(self,
        self._linespacing = linespacing
        self.set_rotation_mode(rotation_mode)
        self.update(kwargs)
        self._cached = dict()

    def update(self, kwargs):
        # docstring inherited
@@ -177,6 +177,7 @@ def __getstate__(self):
        d = super().__getstate__()
        # remove the cached _renderer (if it exists)
        d['_renderer'] = None
        d['_cached'] = {}
        return d

    def contains(self, mouseevent):
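Clearing `_cached` in `__getstate__` keeps instances picklable even when the cache holds objects tied to an unpicklable renderer, matching how `_renderer` is already dropped. A hedged sketch of the idea with a stand-in class (`Labeled` is hypothetical, not matplotlib's `Text`):

```python
import pickle


class Labeled:
    """Stand-in for an artist that carries a per-instance cache."""

    def __init__(self, text):
        self.text = text
        self._cached = {}

    def __getstate__(self):
        d = self.__dict__.copy()
        d['_cached'] = {}  # drop cache entries that may not pickle
        return d


obj = Labeled("hi")
obj._cached['layout'] = lambda: None  # lambdas cannot be pickled
clone = pickle.loads(pickle.dumps(obj))  # succeeds: cache was cleared
```

Without the `__getstate__` override, the lambda in the cache would make `pickle.dumps(obj)` raise.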