
imshow with interpolation='bicubic' on downsampled large arrays looks blurred #17490


Closed
TomFD opened this issue May 22, 2020 · 9 comments
Labels
status: needs clarification Issues that need more information to resolve.

Comments

@TomFD

TomFD commented May 22, 2020

When a huge array (here 8k x 1k) with "sharp" data is displayed with imshow in a window whose pixel resolution is lower than that of the array, it looks blurred.
I'm questioning whether this behaviour is correct. Here is an example of the matplotlib imshow output, produced with a standard line of code.

Code for reproduction

ax.imshow(field,aspect='auto',interpolation='bicubic',cmap='jet',origin='lower')

Actual outcome

[screenshot: imshow output with interpolation='bicubic', the line appears blurred]

However, the downsampled version (plotted without matplotlib, via a direct draw that actually uses a bicubic algorithm for downsizing) shows the sharp lines I expected.

Expected outcome

[screenshot: directly downsampled (bicubic) image, the line appears sharp]

So is matplotlib downsampling to below the display resolution and then interpolating back up via bicubic interpolation? Or how else can one get a blurred output when the input array is much larger and "sharp"?

Don't misunderstand me, I'm not interested in "nearest" etc.
Unfortunately, the imshow/bicubic examples only show interpolation while upsampling.

Thank you,

Tom

Matplotlib version

Operating system: Windows 10
Matplotlib version: 3.1.3
Matplotlib backend (print(matplotlib.get_backend())): TkAgg
Python version: 3.7.7

Installed Anaconda and packages via conda in a separate environment.

@timhoffm
Member

  • Can you provide a minimal example so that we can test?
  • Are you using a high-resolution screen with display scaling > 1?
  • Even though you say you are "not interested in "nearest" etc.", can you try interpolation="bilinear"? In particular for downsampling it should not be too different from bicubic.

@tacaswell
Member

Are you resampling before or after color mapping?

The relevant code is

if A.ndim == 2:
    # if we are a 2D array, then we are running through the
    # norm + colormap transformation. However, in general the
    # input data is not going to match the size on the screen so we
    # have to resample to the correct number of pixels
    # TODO slice input array first
    inp_dtype = A.dtype
    a_min = A.min()
    a_max = A.max()
    # figure out the type we should scale to. For floats,
    # leave as is. For integers cast to an appropriate-sized
    # float. Small integers get smaller floats in an attempt
    # to keep the memory footprint reasonable.
    if a_min is np.ma.masked:
        # all masked, so values don't matter
        a_min, a_max = np.int32(0), np.int32(1)
    if inp_dtype.kind == 'f':
        scaled_dtype = A.dtype
        # Cast to float64
        if A.dtype not in (np.float32, np.float16):
            if A.dtype != np.float64:
                cbook._warn_external(
                    f"Casting input data from '{A.dtype}' to "
                    f"'float64' for imshow")
            scaled_dtype = np.float64
    else:
        # probably an integer of some type.
        da = a_max.astype(np.float64) - a_min.astype(np.float64)
        # give more breathing room if a big dynamic range
        scaled_dtype = np.float64 if da > 1e8 else np.float32
    # scale the input data to [.1, .9]. The Agg
    # interpolators clip to [0, 1] internally, use a
    # smaller input scale to identify which of the
    # interpolated points need to be should be flagged as
    # over / under.
    # This may introduce numeric instabilities in very broadly
    # scaled data
    # Always copy, and don't allow array subtypes.
    A_scaled = np.array(A, dtype=scaled_dtype)
    # clip scaled data around norm if necessary.
    # This is necessary for big numbers at the edge of
    # float64's ability to represent changes. Applying
    # a norm first would be good, but ruins the interpolation
    # of over numbers.
    self.norm.autoscale_None(A)
    dv = np.float64(self.norm.vmax) - np.float64(self.norm.vmin)
    vmid = self.norm.vmin + dv / 2
    fact = 1e7 if scaled_dtype == np.float64 else 1e4
    newmin = vmid - dv * fact
    if newmin < a_min:
        newmin = None
    else:
        a_min = np.float64(newmin)
    newmax = vmid + dv * fact
    if newmax > a_max:
        newmax = None
    else:
        a_max = np.float64(newmax)
    if newmax is not None or newmin is not None:
        np.clip(A_scaled, newmin, newmax, out=A_scaled)
    A_scaled -= a_min
    # a_min and a_max might be ndarray subclasses so use
    # item to avoid errors
    a_min = a_min.astype(scaled_dtype).item()
    a_max = a_max.astype(scaled_dtype).item()
    if a_min != a_max:
        A_scaled /= ((a_max - a_min) / 0.8)
    A_scaled += 0.1
    # resample the input data to the correct resolution and shape
    A_resampled = _resample(self, A_scaled, out_shape, t)
    # done with A_scaled now, remove from namespace to be sure!
    del A_scaled
    # un-scale the resampled data to approximately the
    # original range things that interpolated to above /
    # below the original min/max will still be above /
    # below, but possibly clipped in the case of higher order
    # interpolation + drastically changing data.
    A_resampled -= 0.1
    if a_min != a_max:
        A_resampled *= ((a_max - a_min) / 0.8)
    A_resampled += a_min
    # if using NoNorm, cast back to the original datatype
    if isinstance(self.norm, mcolors.NoNorm):
        A_resampled = A_resampled.astype(A.dtype)
    mask = (np.where(A.mask, np.float32(np.nan), np.float32(1))
            if A.mask.shape == A.shape  # nontrivial mask
            else np.ones_like(A, np.float32))
    # we always have to interpolate the mask to account for
    # non-affine transformations
    out_alpha = _resample(self, mask, out_shape, t, resample=True)
    # done with the mask now, delete from namespace to be sure!
    del mask
    # Agg updates out_alpha in place. If the pixel has no image
    # data it will not be updated (and still be 0 as we initialized
    # it), if input data that would go into that output pixel than
    # it will be `nan`, if all the input data for a pixel is good
    # it will be 1, and if there is _some_ good data in that output
    # pixel it will be between [0, 1] (such as a rotated image).
    out_mask = np.isnan(out_alpha)
    out_alpha[out_mask] = 1
    # Apply the pixel-by-pixel alpha values if present
    alpha = self.get_alpha()
    if alpha is not None and np.ndim(alpha) > 0:
        out_alpha *= _resample(self, alpha, out_shape,
                               t, resample=True)
    # mask and run through the norm
    output = self.norm(np.ma.masked_array(A_resampled, out_mask))

This discussion in #13724 may also be of interest.

Without a minimal example to test with, or any context as to how you generated your second image, there isn't much we can do to help you.

@tacaswell tacaswell added the status: needs clarification Issues that need more information to resolve. label May 23, 2020
@tacaswell tacaswell added this to the unassigned milestone May 23, 2020
@TomFD
Author

TomFD commented May 24, 2020

Thank you so far. I checked and it's not a resolution / Windows magnification issue. The other hint, about how the image is resampled, gets closer. However, I'm a user and don't know where to find the "_resample" code.
Nevertheless, I made a minimal example. The array is much larger than the pixel resolution of the figure, and I'm plotting this array with interpolation 'none', 'nearest' and 'bicubic'.
The 'bicubic' result looks nice ... but I still don't understand why the bicubic DOWN-sampling does (that much) blurring.
For comparison, I made a last plot where I manually downsampled. I expected to see something like this in the high-res 'bicubic' version as well. You can change the downsampling factor to e.g. 1/4, or the order to 2, but as it then does not need any interpolation, no blurring occurs.

Code for reproduction

import matplotlib.pyplot as plt
import numpy as np
import math
from scipy import ndimage

Nx = 2048
Ny = 2048
gw = 4  # width of the Gaussian line

x = np.arange(Nx)
y = np.arange(Ny)

# narrow diagonal Gaussian line on a 2048 x 2048 grid ("sharp" data)
X, Y = np.meshgrid(x, y)
a = np.exp(-(X - Nx//2 + (-Y + Ny//2)*4)**2 / gw**2)

fig = plt.figure()

# plot the full-resolution array with three interpolation settings
i = 1
for intp in ['none', 'nearest', 'bicubic']:
    ax = fig.add_subplot(2, 2, i)
    ax.imshow(a, origin='lower', aspect='auto', cmap='jet', interpolation=intp)
    i += 1

# manually downsampled version for comparison (nearest-neighbour zoom by ~1/pi^2)
b = ndimage.zoom(a, 1.0/math.pi**2, order=0)
ax = fig.add_subplot(2, 2, i)
ax.imshow(b, origin='lower', aspect='auto', cmap='jet', interpolation='none')

plt.show()

@jklymak
Member

jklymak commented May 24, 2020

See https://matplotlib.org/devdocs/gallery/images_contours_and_fields/image_antialiasing.html. Basically, if you sub-sample you should smooth first so you don't get aliasing of small scales into large scales.
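A minimal sketch of that "smooth, then subsample" idea outside Matplotlib, assuming SciPy is available; the array, decimation factor and filter width below are arbitrary choices for illustration:

import numpy as np
from scipy import ndimage

a = np.random.rand(2048, 2048)   # stand-in for a large "sharp" array
factor = 8                       # decimation factor (assumed)

# naive subsampling: keep every 8th sample; fine structure can alias
naive = a[::factor, ::factor]

# anti-aliased subsampling: low-pass filter at roughly the new sample
# spacing first, then decimate
smoothed = ndimage.gaussian_filter(a, sigma=factor / 2)
antialiased = smoothed[::factor, ::factor]

The cost of the low-pass filter is exactly the smoothing discussed in this thread; the benefit is that small-scale structure cannot masquerade as large-scale structure after decimation.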

@TomFD
Author

TomFD commented May 24, 2020

OK, I hope I understand: when downsampling, the original is smoothed first (antialiased?), then downsampled. Does that mean the interpolations are then not actually used, but only indicate that the smoothing should be done? Only for 'none' and 'nearest' is there no smoothing before the downsampling. That's why I was confused. I think I have seen a parameter named "antialiased" in one of the posted discussions, but it did not make it into a release ...

@jklymak
Member

jklymak commented May 24, 2020

If you downsample, the “interpolation” operation is still applied to get the downsampled points. This acts as a smoothing unless you choose nearest. But nearest is a terrible way to downsample a signal because it causes moiré patterns.

The “antialiased” method applies the smoothing (Hanning) if downsampling, or if resampling within a factor of two, and no interpolation if not. I'm not at my computer so can't check what version that will be released in. Maybe 3.3.
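For reference, a small comparison one can run with Matplotlib 3.2 or later, where interpolation='antialiased' is available (see below); the test pattern and figure size are arbitrary choices for illustration:

import matplotlib.pyplot as plt
import numpy as np

# high-frequency test pattern, much larger than the on-screen pixel count
x = np.arange(2000)
data = np.sin(x / 3)[np.newaxis, :] * np.sin(x[:, np.newaxis] / 3)

fig, axs = plt.subplots(1, 2, figsize=(7, 3.5))
# 'nearest' subsamples without smoothing and tends to show moiré patterns
axs[0].imshow(data, interpolation='nearest', cmap='gray')
axs[0].set_title('nearest')
# 'antialiased' smooths before downsampling, trading moiré for mild blur
axs[1].imshow(data, interpolation='antialiased', cmap='gray')
axs[1].set_title('antialiased')
plt.show()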

@jklymak
Member

jklymak commented May 25, 2020

interpolation='antialiased' is in version 3.2, which is released.

How are you diagnosing "blur"? If I set the dpi of the figure to 100 or so and save as a PNG, I get speckles for the subsampled data, and a straight line about one or two pixels wide for the smoothed version (I've zoomed and screenshotted to see the detail). That's basically how anti-aliasing works: you smooth at a slightly larger scale than your new Nyquist wavelength and then sub-sample. The cost is that the resulting signal is a bit smoothed out.

None:

[zoomed screenshot: speckled line for the subsampled data]

Bicubic:

[zoomed screenshot: smooth line, about one or two pixels wide]
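For anyone wanting to reproduce that check, a small sketch; the stand-in array, file names and the 100 dpi value are arbitrary:

import matplotlib.pyplot as plt
import numpy as np

field = np.random.rand(2048, 2048)   # stand-in for the array being plotted

for intp in ['none', 'bicubic']:
    fig, ax = plt.subplots(figsize=(4, 4), dpi=100)   # ~400x400 output pixels
    ax.imshow(field, interpolation=intp, cmap='jet')
    # saving at a fixed dpi pins the rendered pixel count, so the PNG can be
    # zoomed afterwards to inspect aliasing vs. smoothing
    fig.savefig(f'downsampled_{intp}.png', dpi=100)
    plt.close(fig)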

@TomFD
Author

TomFD commented May 26, 2020

I think this clarifies it, thank you very much. Saving it to a PNG helped; it might be that the original screenshot was an overlay of the bicubic downsampling and the Windows screen zoom settings.

@TomFD TomFD closed this as completed May 26, 2020
@jklymak
Member

jklymak commented May 26, 2020

Right, it's confusing to try to figure these things out when the number of pixels in the image changes when you zoom.
