Add a check in check_figures_equal that the test did not accidentally plot on non-fixture figures #18581
Comments
Alternatively, and maybe simpler: check that the reference figure is not empty.
"Empty" is tricky to check, and actually I think there are `check_figures_equal` tests where I check that some commands are equivalent to not drawing anything :)
Also related, I have been thinking about ways to eliminate … These thoughts are only barely formed, but externalizing in case anyone has similar ideas / wants to push on them...
I think so, thanks for the PR.
Problem
`check_figures_equal` relies on the test plotting onto `fig_test` and `fig_ref`, not elsewhere (e.g. on newly instantiated figures, as noticed in #18579).
Proposed Solution
It should be reasonable to also fail the test if the test function creates new pyplot figures, in order to catch errors like the one in #18579.
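The proposed check could be sketched roughly as below. This is a hypothetical standalone decorator, not the actual `check_figures_equal` implementation: it records the open pyplot figure numbers before the test body runs and fails if any new ones exist afterwards.

```python
import functools
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt


def assert_no_stray_figures(func):
    """Hypothetical wrapper: fail if *func* creates extra pyplot figures.

    Sketch of the proposed check: snapshot the open figure numbers
    before the test body runs, and raise if new ones appear.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        before = set(plt.get_fignums())
        try:
            return func(*args, **kwargs)
        finally:
            stray = sorted(set(plt.get_fignums()) - before)
            if stray:
                plt.close("all")  # don't leak figures into later tests
                raise AssertionError(
                    f"test created stray pyplot figure(s): {stray}")
    return wrapper
```

In `check_figures_equal` itself the same bookkeeping would wrap the call to the decorated test function, so tests that plot onto anything other than the provided `fig_test`/`fig_ref` fail loudly instead of silently passing.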
Additional context and prior art
Instead, we could also change `check_figures_equal` to not pass the two figures in at all, and always expect the test function itself to create exactly two pyplot figures, which would then be compared. But then it's a bit arbitrary which figure is the reference and which is the test (although that only really (barely) matters for display purposes in triage_tests.py), and it feels a bit dirty (even ignoring the API break, which is, as always, manageable)...