
Fixed SVG-as-text image comparison tests. #23350


Merged
1 commit merged into matplotlib:main from svgastext-nofail-fix on Jul 9, 2022

Conversation

@tfpf (Contributor) commented Jun 26, 2022

SVG-as-text tests make the figure background transparent, but its colour (i.e. the colour under the transparent mask) is actually black. Before the expected and actual images are compared, they are converted from RGBA to RGB, which makes them both completely black (since the text is also black). As a result, the test passes even if the expected and actual images differ. Making the text white solves this problem.
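
To illustrate the failure mode, here is a minimal sketch of my own (not code from the test suite): two RGBA images that differ only in opaque black pixels drawn over a transparent black background become indistinguishable after Pillow's RGBA-to-RGB conversion, because that conversion simply discards the alpha band.

import numpy as np
from PIL import Image

a = np.zeros((4, 4, 4), dtype=np.uint8)   # fully transparent black "figure"
b = a.copy()
b[1, 1] = [0, 0, 0, 255]                  # opaque black "text" pixel, only in b

rgb_a = np.asarray(Image.fromarray(a).convert("RGB"))
rgb_b = np.asarray(Image.fromarray(b).convert("RGB"))
print(np.array_equal(rgb_a, rgb_b))       # True: the difference has vanished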

PR Summary

Originally noted in #22852 (comment). If the $-$ in svgastext_math_tests is changed to something else, the test still passes, because the colour of the transparent figure patch is actually black.

Ideally, the colour of the patch should be changed from black to white (instead of setting the colour of the text to white and changing the two images), but even after doing:

  • mpl.rcParams['figure.facecolor'] = 'white'
  • mpl.rcParams['savefig.facecolor'] = 'white'
  • fig.patch.set(visible=False, facecolor='white')

the patch remained black. (Is there any other way?)
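
(For context, a sketch of those attempts gathered in one place; the figure set-up and file name here are made up, and per the above none of these changed the rendered patch colour:)

import matplotlib as mpl
import matplotlib.pyplot as plt

mpl.rcParams['figure.facecolor'] = 'white'
mpl.rcParams['savefig.facecolor'] = 'white'

fig = plt.figure()
fig.patch.set(visible=False, facecolor='white')
fig.text(0.5, 0.5, '$-$')                 # stand-in for the tested text
fig.savefig('minus.svg')                  # hypothetical output file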

PR Checklist

Tests and Styling

  • Has pytest-style unit tests (and pytest passes).
  • Is Flake8 compliant (install flake8-docstrings and run flake8 --docstring-convention=all).

Documentation

  • [N/A] New features are documented, with examples if plot related.
  • [N/A] New features have an entry in doc/users/next_whats_new/ (follow instructions in README.rst there).
  • [N/A] API changes documented in doc/api/next_api_changes/ (follow instructions in README.rst there).
  • [N/A] Documentation is sphinx and numpydoc compliant (the docs should build without error).

@anntzer (Contributor) commented Jun 26, 2022

I can't repro the problem locally: the background gets converted to white, not to black, in my case.
In any case, it would seem that a better fix would be for the testing machinery to not ignore the alpha channel if it is present, no? Does something like the following work for you?

diff --git i/lib/matplotlib/testing/compare.py w/lib/matplotlib/testing/compare.py
index 665e055f12..1119cf10f8 100644
--- i/lib/matplotlib/testing/compare.py
+++ w/lib/matplotlib/testing/compare.py
@@ -371,6 +371,11 @@ def calculate_rms(expected_image, actual_image):
 # 16-bit depth, as Pillow converts these to RGB incorrectly.
 
 
+def _load_image(path):
+    img = Image.open(path)
+    return np.asarray(img if img.mode == "RGBA" else img.convert("RGB"))
+
+
 def compare_images(expected, actual, tol, in_decorator=False):
     """
     Compare two "image" files checking differences within a tolerance.
@@ -435,9 +440,9 @@ def compare_images(expected, actual, tol, in_decorator=False):
         actual = convert(actual, cache=True)
         expected = convert(expected, cache=True)
 
-    # open the image files and remove the alpha channel (if it exists)
-    expected_image = np.asarray(Image.open(expected).convert("RGB"))
-    actual_image = np.asarray(Image.open(actual).convert("RGB"))
+    # open the image files
+    expected_image = _load_image(expected)
+    actual_image = _load_image(actual)
 
     actual_image, expected_image = crop_to_same(
         actual, actual_image, expected, expected_image)
@@ -486,9 +491,8 @@ def save_diff_image(expected, actual, output):
     output : str
         File path to save difference image to.
     """
-    # Drop alpha channels, similarly to compare_images.
-    expected_image = np.asarray(Image.open(expected).convert("RGB"))
-    actual_image = np.asarray(Image.open(actual).convert("RGB"))
+    expected_image = _load_image(expected)
+    actual_image = _load_image(actual)
     actual_image, expected_image = crop_to_same(
         actual, actual_image, expected, expected_image)
     expected_image = np.array(expected_image).astype(float)

@tfpf (Contributor, Author) commented Jun 27, 2022

That's odd. It's noteworthy that in #22852, the latest commit (22faffc) contains two wrong images (one each for Computer Modern and DejaVu Sans), and the tests still passed on CI!

I'll try with these suggested changes, but perhaps it is more important to reproduce the problem first?

Just checked. These changes work; the tests fail if the text is changed. Probably unrelated, but this made me think: since we know we are loading PNG images, if it is guaranteed that they will have a transparency layer, we can avoid checking for it and just compare the images directly. That way, if savefig stops adding the transparency, we'd know immediately. (Unless there already is a test for that?)
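
That stricter approach might look something like this (a hypothetical sketch; _load_image_strict is a made-up name, not something this PR adds):

import numpy as np
from PIL import Image

def _load_image_strict(path):
    # Keep whatever channels the file has: if savefig ever stopped writing
    # an alpha channel, the comparison would fail with a shape mismatch
    # instead of the channel being silently dropped.
    return np.asarray(Image.open(path))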

@tfpf marked this pull request as draft on June 28, 2022 08:32
@tfpf (Contributor, Author) commented Jun 28, 2022

After d23db61, all checks (other than PR cleanliness) passed; can that be considered a reproduction of the issue?

Was there a good reason the transparency was not being considered?

@anntzer I'm not sure if I should be the one to make your suggested changes to this PR, since it was your idea and effort! :sweat_smile:

@anntzer (Contributor) commented Jun 28, 2022

It's OK, I donate the patch above to you if it works; feel free to give it a try. (Sorry, I may not have time to investigate the repro for now, although I agree it would be nice to figure out why it only occurs sometimes.)
My guess is that back when the test machinery was introduced, transparency support was more brittle and therefore skipped? Haven't checked the git history, though.

@tfpf force-pushed the svgastext-nofail-fix branch from d23db61 to bb2d5f2 on June 29, 2022 05:31
@anntzer (Contributor) commented Jun 29, 2022

The CI failures are legitimate. It looks like you need an escape hatch so that a fully opaque (alpha == 255) RGBA image can compare equal to an RGB image?

@tfpf (Contributor, Author) commented Jun 29, 2022

Understood; I'll take a look at the failures and check.

@tfpf (Contributor, Author) commented Jun 30, 2022

I updated the function like this.

def _load_image(path):
    img = Image.open(path)
    # Keep the alpha channel only if it carries information, i.e. if at
    # least one pixel is not fully opaque; otherwise drop it, so that an
    # opaque RGBA image still compares equal to an RGB baseline.
    if img.mode == "RGBA" and img.getextrema()[3][0] < 255:
        return np.asarray(img)
    return np.asarray(img.convert("RGB"))
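
A quick way to sanity-check both branches (a sketch of mine; file names are made up):

import numpy as np
from PIL import Image

opaque = np.full((2, 2, 4), 255, dtype=np.uint8)
Image.fromarray(opaque).save('opaque.png')
print(_load_image('opaque.png').shape)        # (2, 2, 3): alpha dropped

transparent = opaque.copy()
transparent[0, 0, 3] = 0                      # one see-through pixel
Image.fromarray(transparent).save('transparent.png')
print(_load_image('transparent.png').shape)   # (2, 2, 4): alpha kept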

One test (test_bbox_inches_tight_suptile_legend[svg]) fails with the error: Image sizes do not match expected size: (569, 885, 4) actual size (569, 885, 3). This is a genuine error, not caused by these changes but one that was already present; it went undetected earlier because the alpha channel was ignored. Here's what the alpha channels look like.

[Screenshot: alpha channels of the expected and actual images]

The actual image is completely opaque; the expected image isn't: its rightmost column has transparency (note the value reported at the mouse position in the screenshot).

@tfpf (Contributor, Author) commented Jul 2, 2022

This transparency problem is a lot more complicated than I thought! On the current main branch, this is what I get when I run test_bbox_inches_tight_suptile_legend[svg].

  • bbox_inches_tight_suptile_legend-expected.svg has a transparent strip at its right edge.
  • bbox_inches_tight_suptile_legend.svg also has a transparent strip at its right edge, but it is thinner. Update: this might be an artefact of my image viewer, so this point and the two questions below are moot.
  • bbox_inches_tight_suptile_legend_svg.png does not have the transparent strip.

Questions:

  • Why does the actual SVG have transparency? As far as I can see, the test doesn't do anything to make that happen.
  • Why does the actual SVG-converted-to-PNG have no transparency?

(Small update: only the Linux tests failed, so might it have something to do with Inkscape? I encountered the transparency problem on Linux Mint 20.2 and Manjaro Linux.)

@tacaswell A visual inspection of this image's history suggests that the latest commit, f15c26b, might have introduced the transparent strip. Could you shed some light here? :sweat_smile:

@tfpf force-pushed the svgastext-nofail-fix branch from 5d127c4 to 41b671b on July 7, 2022 07:47
@tfpf (Contributor, Author) commented Jul 7, 2022

After manually converting the actual and expected SVG images (using Inkscape) to PNG, I believe that the original image should not have a transparent strip.

@tfpf marked this pull request as ready for review on July 7, 2022 12:26
SVG-as-text tests make the figure background transparent, but its colour (i.e. the colour under the transparent mask) is actually black. Before the expected and actual images are compared, they are converted from RGBA to RGB, which makes them both completely black (since the text is also black). As a result, the test passes even if the expected and actual images differ. Not ignoring the transparency layer fixes this problem: discard the transparency layer (if any) only if the image is fully opaque.
@tfpf force-pushed the svgastext-nofail-fix branch from 41b671b to 855332a on July 8, 2022 06:06
@QuLogic added this to the v3.6.0 milestone on Jul 8, 2022
@tacaswell (Member) commented

My very vague memory (refreshed by looking at #9817) is that the change was from ints to floats in the SVG output, with no other intended changes. If it affected anything with transparency, that was unintentional.

@tacaswell (Member) commented

My guess is that the source of the strip is related to rounding / snapping on the clip box.

It is oddly reassuring how much our SVGs have changed while still producing the same visual result!

@tacaswell merged commit f06c2c3 into matplotlib:main on Jul 9, 2022
@tacaswell (Member) commented

Thank you, @tfpf!

@tfpf deleted the svgastext-nofail-fix branch on July 10, 2022 08:27