The example of the testing decorator does not work. #18168
Comments
You need to update Matplotlib, or downgrade pytest.
@QuLogic I still have an issue with testing. Tests work fine when I run them one by one, but when I run them together the second one fails. Should I explicitly close all the opened figures at the beginning of each test?

```python
from matplotlib.testing.decorators import image_comparison
from matplotlib import pyplot as plt


@image_comparison(baseline_images=['line_dashes1'], remove_text=True, extensions=['png'])
def test_line_dashes1():
    fig, ax = plt.subplots()
    ax.plot(range(10), linestyle=(0, (3, 3)), lw=5)


@image_comparison(baseline_images=['line_dashes2'], remove_text=True, extensions=['png'])
def test_line_dashes2():
    fig, ax = plt.subplots()
    ax.plot(range(10), linestyle=(0, (3, 3)), lw=5)
```

Output of pytest on this file:
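As a side note, here is a minimal sketch of the explicit close-all workaround asked about in this comment. It is an illustrative assumption on my part, not the fix the maintainers suggest later in the thread:

```python
# Sketch only: explicitly close any figures left open by earlier tests
# so that the decorator compares just the figure created in this test.
from matplotlib import pyplot as plt
from matplotlib.testing.decorators import image_comparison


@image_comparison(baseline_images=['line_dashes2'], remove_text=True, extensions=['png'])
def test_line_dashes2():
    plt.close('all')  # drop figures a previous test may have left open
    fig, ax = plt.subplots()
    ax.plot(range(10), linestyle=(0, (3, 3)), lw=5)
```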
@SmirnGreg perhaps discuss this at discourse.matplotlib.org?
Dear @jklymak, I could discuss that on Stack Overflow or some other forum as well, but I believe there is a real issue in the image testing within matplotlib that might result in a change to the library code. Let me explain my point. Matplotlib is the most heavily used plotting library in the Python community; in science and data science it is definitely the most used. Groups of scientists rely on matplotlib, and since matplotlib moved to version 3 and dropped the Python 2 legacy, the community may demand stability. The current industry standard, as well as serious scientific development, includes writing tests for the code, especially when using rapidly changing frameworks. Matplotlib tries to provide a testing environment, such as the image_comparison decorator. Currently, the documentation suggests that the decorator compares the images generated by the test against the saved baselines.

But it is not comparing the images generated by the test, rather all open figures. As far as I can see (I did not go into the source code yet), the decorator tries to compare all currently opened figures with the saved ones, and at the same time the previous test leaves its figure open. I can propose three solutions for that:

I am ready to implement one of the above if such a PR is likely to be accepted, maybe with support from the core team if possible. Best wishes
I suggest discourse because I think you need help setting up your testing environment, and this is not the appropriate venue for that.
The way the tests work, they have to collect all of the currently open figures (one could argue we should be returning the tuple of the figures we want tested, but that is neither here nor there). The way we cope with this in our own test suite (which uses these tools) is via matplotlib/lib/matplotlib/testing/conftest.py (lines 32 to 101 in d38e443), which is auto-used for all of the tests and takes care of making sure any global state that is touched is cleaned up, and (implicitly) closing all of the figures.

I think the fix here is to document importing that fixture into your conftest.py.
Dear @tacaswell, thanks a lot, that is what I was looking for! It seems to me that when testing not matplotlib itself but one's own packages, the following import makes everything work (as the fixture is applied to every test case):

from matplotlib.testing.conftest import mpl_test_settings

No further setup is required with the latest pip releases of pytest and matplotlib.
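For a downstream package, that import would typically live in a top-level conftest.py. A minimal sketch, with the file layout as an assumption:

```python
# conftest.py at the root of the package's test suite (illustrative layout).
# Re-exporting Matplotlib's autouse fixture makes its per-test cleanup
# (rcParams reset and closing of figures) apply to every test collected here.
from matplotlib.testing.conftest import mpl_test_settings  # noqa: F401
```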
@SmirnGreg Where would you have expected to find that information (and can you open a PR putting it there)?
I'm wondering if our testing functions are intended to be public for the end user. How does our internal testing compare to pytest-mpl? A pytest plugin seems to be the cleaner approach compared to importing from conftest. |
I would 100% say everyone else should use pytest-mpl.
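For comparison, a minimal sketch of how the same kind of test could look with the pytest-mpl plugin (assuming pytest-mpl is installed; baselines are generated with pytest --mpl-generate-path=baseline and checked with pytest --mpl):

```python
import matplotlib.pyplot as plt
import pytest


@pytest.mark.mpl_image_compare(baseline_dir='baseline')
def test_line_dashes():
    # pytest-mpl compares only the figure returned by the test function,
    # so figures left open by other tests do not interfere.
    fig, ax = plt.subplots()
    ax.plot(range(10), linestyle=(0, (3, 3)), lw=5)
    return fig
```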
@timhoffm @dopplershift
Based on the comment by @tacaswell, I would propose one or more of the following:
What would you say?

I'd go with the three documentation bullets.
@SmirnGreg Just to be clear, adding from matplotlib.testing.conftest import mpl_test_settings fixes the problem with the tests failing? If so, I'll go ahead and add this line to the example in the docs as well, in the places that you suggested.

@timhoffm Would that be alright?
Um, not the pytest expert here, but I think it's sufficient to have the import in the test file itself.
This won't work, as pytest needs to find the fixture before it finds the test. Might make sense to add a conftest.py.
Yes, pytest can find fixtures either in a conftest.py or at the module level. If you want it everywhere in a package, put it in conftest.py; if you only want it in a single test module, import it at the module level. Either will work; it depends on how aggressively you want to reset Matplotlib's global state. For better or worse, I think users have discovered our testing module and are using it, so we should document it; however, I would not add it to the example, as that example is about how to write tests for our own test suite. I do think that pytest-mpl is a better option (and I agree with @dopplershift that if we had the effort we would migrate to depending on them).
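A minimal sketch of the module-level placement described above (file name illustrative); with this, the fixture applies only to the tests defined in this one file:

```python
# test_line_dashes.py — module-level import instead of a package-wide conftest.py
from matplotlib.testing.conftest import mpl_test_settings  # noqa: F401
from matplotlib.testing.decorators import image_comparison
from matplotlib import pyplot as plt


@image_comparison(baseline_images=['line_dashes1'], remove_text=True, extensions=['png'])
def test_line_dashes1():
    fig, ax = plt.subplots()
    ax.plot(range(10), linestyle=(0, (3, 3)), lw=5)
```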
Bug report
Bug summary
The example of the testing decorator does not work.
https://matplotlib.org/3.2.1/devel/testing.html#writing-an-image-comparison-test
Code for reproduction
Actual outcome
Expected outcome
Matplotlib version
Matplotlib backend (print(matplotlib.get_backend())): not applicable
Installation: pip install <package that depends on matplotlib and has it in requirements> within a new Conda environment.

I want to use something to test whether the figures generated by my code look the same or similar compared to the figures generated earlier.