
ENH: anti-alias down-sampled images #13724

Merged

Conversation

@jklymak (Member) commented Mar 21, 2019

PR Summary

Closes #5602

See new example documenting the change: https://23268-1385122-gh.circle-artifacts.com/0/home/circleci/project/doc/build/html/gallery/images_contours_and_fields/image_antialiasing.html

UPDATED: 16 April 2019

The API is now

ax.imshow(A, interpolation='antialiased')

with 'antialiased' as the default, and 'nearest' giving the old behaviour. This API was discussed on the early-April development call, and strongly suggested by @efiring.

Q: (a) should this be the default? And (b) should the 12 or so failing tests be switched to a non-default interpolation so their baseline images don't have to change?

OLD (superseded by the update above):

Below is the logic for turning on hanning-window smoothing:

  • only do it if the interpolation is already "nearest"; the other interpolation schemes already anti-alias.
  • only do it for down-sampling, and for up-sampling by less than a factor of three, so long as the up-sample factor isn't an integer.
interpolation = self.get_interpolation()
if interpolation == 'nearest' and self.get_antialiased():
    # do anti-aliasing...
    # compare the number of displayed pixels to the number of
    # data pixels.
    samplerate = t.transform([A.shape[1], 0])[0] / A.shape[1]
    if samplerate <= 3:
        # don't anti-alias if the sample rate is an exact integer (e.g. 1 or 2)...
        if np.abs(int(samplerate) - samplerate) > 0.0001:
            # do anti-aliasing
            interpolation = 'hanning'
import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal
import timeit

# a = plt.imread('rings_sm_orig.gif')
x = np.arange(2000) / 2000 - 0.5
y = np.arange(2000) / 2000 - 0.5

X, Y = np.meshgrid(x, y)
R = np.sqrt(X**2 + Y**2)
f0 = 10
k = 1000
a = np.sin(np.pi * 2 * (f0 * R + k * R**2 / 2))

for filt in ['hanning', 'gaussian', 'lanczos', 'sinc']:
    # approx 10x and 20x subsampling, deliberately with an un-even (non-integer) factor...
    for dpi in [210, 95]:
        fig, axs = plt.subplots(1, 2, figsize=(4, 2), num=1)
        ax = axs[0]
        ax.imshow(a, cmap='gray', antialiased=False)
        ax.set_position([0, 0, 0.5, 1])
        ax = axs[1]
        ax.imshow(a, cmap='gray', antialiased=False,
                  interpolation=filt, filterrad=1000 / dpi / 2)
        ax.set_position([0.5, 0, 0.5, 1])
        ax.axis('off')
        t0 = timeit.default_timer()
        fig.savefig(f'{filt}dpi{dpi}.png', dpi=dpi)
        print('filt', filt, 'dpi', dpi, 'time', '%1.3f'%(timeit.default_timer() - t0))

No anti-aliasing:

  • filt nearest dpi 210 time 0.241
  • filt nearest dpi 95 time 0.382

Hanning:

  • filt hanning dpi 210 time 0.415
  • filt hanning dpi 95 time 0.578

[images: hanningdpi95.png, hanningdpi210.png]

Gaussian:

  • filt gaussian dpi 210 time 1.340
  • filt gaussian dpi 95 time 1.884

[images: gaussiandpi95.png, gaussiandpi210.png]

Lanczos:

  • filt lanczos dpi 210 time 3.236
  • filt lanczos dpi 95 time 7.577

[images: lanczosdpi95.png, lanczosdpi210.png]

Sinc:

  • filt sinc dpi 210 time 8.894
  • filt sinc dpi 95 time 13.192

[images: sincdpi95.png, sincdpi210.png]

Importance of anti-aliasing even if not "downsampling":

Note that we really need the anti-aliasing filter unless we up-sample by more than a factor of two, so this PR proposes applying the filter in that regime as well:

import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal
import timeit

a = plt.imread('rings_sm_orig.gif')
print(np.shape(a))

for dpi in [30, 60, 90, 100, 150, 175, 210]:
    fig, axs = plt.subplots(1, 2, figsize=(4, 2), num=1)
    ax = axs[0]
    ax.imshow(a, cmap='gray', antialiased=True)
    ax.set_position([0, 0, 0.5, 1])
    ax = axs[1]
    ax.imshow(a, cmap='gray', antialiased=False)
    ax.set_position([0.5, 0, 0.5, 1])
    ax.axis('off')
    ax.set_title(dpi)
    t0 = timeit.default_timer()
    fig.savefig(f'filtdpi{dpi}.png', dpi=dpi)

[images: filtdpi175.png, filtdpi150.png, filtdpi110.png, filtdpi100.png, filtdpi90.png, filtdpi60.png, filtdpi30.png]

@jklymak (Member Author) commented Mar 21, 2019

Also please note that of course I don't think people should save at 30 dpi. The point here is that even at 300 dpi, if someone tries to imshow a 10k x 10k matrix they will get aliasing, and we really should anti-alias by default unless they explicitly tell us not to (antialiased=False).

@jklymak jklymak added this to the v3.2.0 milestone Mar 21, 2019
if the image will down-sample the data (i.e. if there is
more data than pixels). If *True*, and the data is down-sampled,
any user supplied filters are ignored, and a Lanczos filter is
applied with radius `2 * M_data / M_image`.
Contributor:

2 or 3? (it's 3 below...)

@anntzer (Contributor) commented Mar 21, 2019

That is really nice.
On the other hand, speed-wise it's a bit of a disaster -- for a 2000x2000 image (the case initially posted on gitter), rendering takes dozens of seconds (whereas it's basically instantaneous without antialiasing), so I still think a simple boxcar would be useful (if it indeed delivers reasonable performance). Also, thinking again about @tacaswell's comments on gitter, there are indeed cases where aggregations other than the mean (or weighted mean, which is basically what this is doing) are useful; for example, the data I'm visualizing is basically a large 0-1 matrix (hence the example) where I mostly want to see the 1's (which are actually rare), so e.g. taking the max() in each pixel would make sense (for my data).
Again, this is to suggest further improvements, not to take anything away from the PR here which is 👍👍

@jklymak (Member Author) commented Mar 21, 2019

The agg filters are written in C and as far as I can tell are just simple convolutions, so it's not clear if a different shaped filter would be more efficient. 2D convolutions are slow. We could try scipy.signal.convolve2d, which should also be pretty optimized. I think the algorithm is O(N x M x r x r) where r is the filter radius.

@@ -5435,7 +5435,8 @@ def get_interp_point(ind):
     def imshow(self, X, cmap=None, norm=None, aspect=None,
                interpolation=None, alpha=None, vmin=None, vmax=None,
                origin=None, extent=None, shape=None, filternorm=1,
-               filterrad=4.0, imlim=None, resample=None, url=None, **kwargs):
+               filterrad=4.0, imlim=None, resample=None,
+               antialiased=True, url=None, **kwargs):
Member:

This needs to go at the end (or be kwarg only) to not change the API.

Member Author:

ooops. Also, I should have made a tentative PR....

Member Author:

Fixed...

@jklymak (Member Author) commented Mar 21, 2019

My guess from some quick playing around w/ scipy.signal.convolve2d is that the agg filters are relatively large (i.e. a few times r) and hence the convolution is slow. It'll take some poking around to see if there is a way to control the size of the filter. Certainly a smaller np.ones((r*2, r*2))/(4*r**2) would be "good enough" and probably a lot more compact..
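For illustration, here is a rough standalone sketch of that boxcar idea (not the PR's implementation; the helper name and the choice of r are hypothetical): pre-smooth the data with the np.ones kernel above via scipy.signal.convolve2d, then let imshow decimate the already low-passed result.

```python
import numpy as np
from scipy.signal import convolve2d

def boxcar_prefilter(data, r):
    """Crude anti-aliasing: local mean over a (2r x 2r) boxcar.

    Frequencies above the down-sampled Nyquist limit are strongly
    attenuated before the renderer decimates the image, at the cost
    of some blurring.
    """
    kernel = np.ones((2 * r, 2 * r)) / (4 * r**2)
    # mode='same' keeps the original shape; boundary='symm' avoids dark borders
    return convolve2d(data, kernel, mode='same', boundary='symm')

# e.g. for a roughly 10x down-sample one might use r = 5:
# ax.imshow(boxcar_prefilter(a, 5), cmap='gray', interpolation='nearest')
```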

@@ -422,6 +441,10 @@ def _make_image(self, A, in_bbox, out_bbox, clip_bbox, magnification=1.0,
                 A_scaled += 0.1
                 A_resampled = np.zeros((out_height, out_width),
                                        dtype=A_scaled.dtype)
+                if self.get_antialiased() and A.shape[0] > out_height:
+                    self.set_interpolation('lanczos')
Member:

can we use a simpler filter?

Member Author:

I don't think lanczos is very complicated. https://en.wikipedia.org/wiki/Lanczos_resampling

The other filters (e.g. Gaussian) don't have a radius associated with them, for some reason I can't fathom, so I didn't really play with them, but will do...

Member Author:

Simpler filter seems to work great. See the update above.

@tacaswell (Member):

I am not sure it is actually implemented as a convolution, I think it is

for pixel in output_image:
    out[pixel] = apply_kernel(input_image, mapped_to_input_space(pixel), kernel_generator(pixel))

where the kernel_generator deals with the sub-pixel offset between the two grids. The upside is that you only evaluate the convolution where you actually need it (because the output domain demands it) but the downside is you generate the kernel N times, hence my suggestion to use a simpler kernel.
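A minimal runnable sketch of that per-output-pixel scheme (all names are illustrative; this is not Agg's code, and it uses flat weights where Agg would evaluate a windowed kernel):

```python
import numpy as np

def resample_per_pixel(src, out_shape, radius=1.0):
    """Toy resampler: one kernel evaluation per *output* pixel."""
    out = np.zeros(out_shape)
    sy = src.shape[0] / out_shape[0]   # data rows per output row
    sx = src.shape[1] / out_shape[1]   # data cols per output col
    for j in range(out_shape[0]):
        for i in range(out_shape[1]):
            # map the centre of this output pixel back to source coordinates
            y = (j + 0.5) * sy
            x = (i + 0.5) * sx
            # footprint of the kernel around that (sub-pixel) location
            y0 = int(max(y - radius * sy, 0))
            y1 = int(min(y + radius * sy, src.shape[0] - 1)) + 1
            x0 = int(max(x - radius * sx, 0))
            x1 = int(min(x + radius * sx, src.shape[1] - 1)) + 1
            # flat weights here; Agg would build e.g. a Hanning window
            out[j, i] = src[y0:y1, x0:x1].mean()
    return out
```

This only evaluates the kernel where the output demands it, but it rebuilds the footprint for every output pixel, which is why a cheaper kernel helps.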

@jklymak (Member Author) commented Mar 22, 2019

@anntzer this doesn't do what you are after, which would be to essentially add more anti-aliasing filters to the agg code. I don't think that would be outrageously hard, but this PR is much more modest in what it is trying to do...

@jklymak jklymak force-pushed the enh-antialias-downsampled-images branch 3 times, most recently from a2a3db2 to af9b9a7 on March 22, 2019 15:06
@anntzer (Contributor) commented Mar 22, 2019

Can you also add the timings with antialiasing disabled?
I guess some example images with >2x oversampling to show that antialiasing is not needed anymore in that case would be nice too.

@jklymak (Member Author) commented Mar 22, 2019

@anntzer updated above:

filt nearest dpi 210 time 0.241
filt nearest dpi 95 time 0.382

    def get_antialiased(self):
        """
        Get whether down-sampled images are low-pass filtered before
        down-sampling (i.e. when there is more data than pixels).
Contributor:

docstring needs update

Member Author:

Yes, I've not done anything to properly document this yet

@anntzer (Contributor) commented Mar 22, 2019

So less than 2x slowdown. Given the "esthetic" quality improvement, this is really good.
Perhaps add an rcparam and a whatsnew suggesting to turn this on in the matplotlibrc at least? (@tacaswell would you be willing to even consider making this on by default?)

@jklymak (Member Author) commented Mar 22, 2019

This PR proposes to make it on by default.

@anntzer (Contributor) commented Mar 22, 2019

I'm happy with that, but I guess some will complain :)

@jklymak (Member Author) commented Mar 22, 2019

I'm happy with that, but I guess some will complain :)

Maybe, but I don't think they should - the [sampling theorem](https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem) is pretty inescapable. I don't think we should show aliased images to our users unless they specifically ask for them.

I actually think we should just leave Gaussian on by default: even when upsampling it helps. This image is upsampled by 2.05:

[image: filtdpi210.png]

@jklymak (Member Author) commented Mar 22, 2019

Though I guess if we upsample by an integer we could turn it off: note the thin colours at the corners are washed out w/ the Gaussian blur...

[image: filtdpi200.png]

@jklymak (Member Author) commented Mar 22, 2019

OK, the problem with all this is that manually zooming doesn't re-call _make_image, so as you start upsampling, the filter doesn't change... That seems a pretty huge drawback. I guess I don't quite understand where the image gets rendered to the canvas, if not in _make_image... Does this all have to happen lower down at the draw stage?

@anntzer (Contributor) commented Mar 22, 2019

When upsampling by a large amount, quite often (for some definition of "quite often") one does (I do, at least) expect to see sharp pixel boundaries and the blurring out by the filter is just annoying. Obviously there's no sharp transition between "we need an antialiasing filter" and "we don't need one", but we need to set it somewhere (if we agree we don't want it when the pixels are large) and putting it where the sampling theorem says to put it sounds like a reasonable default.

@jklymak (Member Author) commented Mar 22, 2019

For sure when zooming you want the blur to be unnoticeable. OTOH, the fact that we use the transform stack to keep track of what part of the image will be drawn makes it challenging to see how zoomed in we are, so it's hard to know a priori how many data pixels are being mapped to how many canvas pixels... (well, or my ignorance of how to get such things out of an arbitrary transform needs to be overcome).

@jklymak (Member Author) commented Mar 22, 2019

New version works with zooming. I found it easiest to drag-resize the figure window while running this code to see the changes...

import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal
import timeit

a = plt.imread('rings_sm_orig.gif')
print(np.shape(a))

for dpi in [200]:
    fig, axs = plt.subplots(1, 2, figsize=(4.1, 2.1), num=1, sharex=True, sharey=True)
    ax = axs[0]
    ax.imshow(a, cmap='gray', antialiased=True)
    ax.set_position([0, 0, 0.5, 1])
    if 1:
        ax = axs[1]
        ax.imshow(a, cmap='gray', antialiased=False)
        ax.set_position([0.5, 0, 0.5, 1])
        ax.axis('off')
        ax.text(100, 190, dpi, fontsize=40)

plt.show()

@jklymak (Member Author) commented Mar 23, 2019

Before working on tests and docs I'd like some consensus that this is the way to go....

@jklymak jklymak added the "status: needs comment/discussion" (needs consensus on next step) label Mar 23, 2019
@anntzer (Contributor) commented Mar 24, 2019

I'm definitely +1 on providing this functionality, undecided but leaning towards +1 on having this on by default.

@jklymak jklymak force-pushed the enh-antialias-downsampled-images branch from 981818c to 04fad9d on July 17, 2019 17:16
# compare the number of displayed pixels of the image to the number of
# the data pixels.
disppixels = int(transform.transform([data.shape[1], 0])[0])
if (disppixels < 3 * data.shape[1] and disppixels != data.shape[1]):
@anntzer (Contributor) Jul 17, 2019:

compare in both x and y and antialias if at least one of them is below Nyquist?
perhaps also disable antialiasing if exactly 1x or 2x zoom? (in which case it's more likely the user exactly adjusted the values for pixel-precise resampling...)

Member Author:

The filters are isotropic, so I don't know what we should do if the up/down sampling isn't isotropic.

@anntzer (Contributor) Jul 17, 2019:

then at least the condition should either be "if both are <3x" or "if one is <3x", not "if the x-axis is <3x".

Member Author:

But that's what I mean - the x re-sample rate should be the same as the y, unless someone is using a funky transform?

Member Author:

Like, it's just extra code, so happy to include it if you like...

Contributor:

One could just be doing imshow([[0, 1], [2, 3]], aspect="auto") for example.

Member Author:

OK, did the y case as well, and added the check for 2x upsample
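
For reference, a rough sketch of the decision this thread converges on (variable and function names are illustrative, not the merged code): anti-alias when either axis is resampled at less than 3x, except when the resample factor is (nearly) an exact integer such as 1x or 2x.

```python
def _wants_antialiasing(transform, data_shape):
    """Illustrative check in the spirit of the discussion above.

    ``transform`` maps data-pixel indices to display pixels (as in
    ``_make_image``); ``data_shape`` is ``A.shape``.
    """
    nrows, ncols = data_shape[:2]
    # how many display pixels the data spans along each axis
    disp_x = transform.transform([ncols, 0])[0]
    disp_y = transform.transform([0, nrows])[1]

    def needs_filter(ndisp, ndata):
        rate = ndisp / ndata
        # skip the filter for (near-)exact integer resampling (1x, 2x, ...),
        # where pixel-sharp output is usually what the user wants
        return rate < 3 and abs(round(rate) - rate) > 1e-4

    return needs_filter(disp_x, ncols) or needs_filter(disp_y, nrows)
```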

@@ -354,6 +372,20 @@ def _make_image(self, A, in_bbox, out_bbox, clip_bbox, magnification=1.0,
             out_height = int(out_height_base)
             out_shape = (out_height, out_width)
 
+            # decide if we are going to do anti-aliasing or not on the image
Contributor:

do you also need to do the check here? isn't it redundant with the one in _resample above?

Member Author:

No, I hadn't caught this. You refactored this, so I now have that logic where your refactoring put it.

@jklymak jklymak force-pushed the enh-antialias-downsampled-images branch from 762dc0b to 2d1c785 on July 17, 2019 18:02
@@ -3764,21 +3764,24 @@ def test_specgram_freqs():
     ax23 = fig2.add_subplot(3, 1, 3)
 
     ax11.specgram(y, NFFT=NFFT, Fs=Fs, noverlap=noverlap,
                   pad_to=pad_to, sides='default')
Contributor:

just set rcParams["image.interpolation"] = "nearest" above in the function and be done once for all? we reset the rcparams at the end of each test anyways.

Member Author:

Ooops, that would have saved some typing. OTOH, I've already done it, so it can't hurt to leave it in explicitly.

Contributor:

It adds a bit of noise to the code :(
You could just do git checkout HEAD~ lib/matplotlib/tests/test_axes.py to undo these changes, but it's up to you.

@jklymak jklymak force-pushed the enh-antialias-downsampled-images branch from 2d1c785 to 5924668 on July 17, 2019 18:10
@jklymak jklymak force-pushed the enh-antialias-downsampled-images branch from 65ae08a to 08a98be on July 17, 2019 20:41
@anntzer (Contributor) left a comment:

This is great :)

@jklymak (Member Author) commented Jul 17, 2019

I actually do not understand the test failure. It's saying "antialiased" is not a valid value for "interpolation", but all the other tests work fine. And I can't reproduce locally, i.e. the failing test runs fine locally.

@jklymak jklymak force-pushed the enh-antialias-downsampled-images branch from ae46fb3 to 37410a6 on July 18, 2019 01:48
-------------------------------------------------------------

Images displayed in Matplotlib previously used nearest-neighbor
interpolation, leading to aliasing effects for non-integer upscaling, and
Member:

Do you mean "downsampling" here?

Member Author:

Changing to:

Images displayed in Matplotlib previously used nearest-neighbor
interpolation, leading to aliasing effects for downscaling and non-integer
upscaling.  

@jklymak jklymak force-pushed the enh-antialias-downsampled-images branch from 37410a6 to d7fa884 on July 18, 2019 19:40
@timhoffm (Member) left a comment:

This is great. Just some optional suggestions for the descriptions in the example.

@@ -915,6 +915,7 @@ def test_nonfinite_limits():
 
 @image_comparison(['imshow', 'imshow'], remove_text=True, style='mpl20')
 def test_imshow():
+    matplotlib.rcParams['image.interpolation'] = 'nearest'
Member:

It would make sense to add a comment

# use former defaults to match existing baseline image

on every rcParams use that is just there to prevent the creation of new baseline images.

If we have to regenerate the baseline images all at some point anyway, we can easily search for that comment and remove the artificial non-default.

image file. When data that makes up the image has a different resolution
than its representation on the screen we will see aliasing effects.

By default Matplotlib has a mild anti-aliasing filter turned on for images
Member:

Maybe:

The default image interpolation in Matplotlib is 'antialiased'. This uses a hanning interpolation for reduced aliasing in most situations. Only when there is upsampling by a factor of 2 or >=3 is a simple nearest-neighbor interpolation used.

Member Author:

Changed to:

The default image interpolation in Matplotlib is 'antialiased'.  This uses a
hanning interpolation for reduced aliasing in most situations. Only when there
is upsampling by a factor of 1, 2 or >=3 is 'nearest' neighbor interpolation
used.

Note that factor of 1 sampling is also special cased (not strictly upsampling, but...)
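
For concreteness, a small self-contained sketch of what that documented default means in practice, re-using the chirp test pattern from the benchmarks earlier in this thread (the behaviour described requires a Matplotlib version containing this PR):

```python
import matplotlib.pyplot as plt
import numpy as np

# high-frequency ring/chirp pattern that aliases badly when down-sampled
x = np.arange(2000) / 2000 - 0.5
X, Y = np.meshgrid(x, x)
R = np.sqrt(X**2 + Y**2)
a = np.sin(np.pi * 2 * (10 * R + 1000 * R**2 / 2))

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(7, 3.5))
ax0.imshow(a, cmap='gray')                            # default: 'antialiased'
ax0.set_title("interpolation='antialiased' (default)")
ax1.imshow(a, cmap='gray', interpolation='nearest')   # old behaviour
ax1.set_title("interpolation='nearest'")
plt.show()
```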

representation has fewer pixels than the data) or mild up-sampling, to a
limit of three-times upsampling.

If a different anti-aliasing filter is desired (or no filter), it can be
Member:

Simpler:

Other anti-aliasing filters can be specified ...



###############################################################################
# Now plot. Note that these images are subsampled, with the images
Member:

The following images are subsampled from 1000 data pixels to 604 rendered pixels.

plt.show()

###############################################################################
# Note that even up-sampling an image will lead to Moire patterns unless
Member:

Suggested change:
- # Note that even up-sampling an image will lead to Moire patterns unless
+ # Even up-sampling an image will lead to Moire patterns unless

# The Moire patterns in the "nearest" interpolation are caused by the
# high-frequency data being subsampled. The "antialiased" image
# still has some Moire patterns as well, but they are greatly reduced.
# We can reduce them more, as in the examples below, but better filters
Member:

The 'better filter + computationally expensive' information should be moved to the section where it's actually rendered. I'd remove this sentence here completely (if you really want something here, it would just be "See below for even better filters.").

Member Author:

Thanks @timhoffm made the suggested changes as noted above.
