Add PiecewiseLinearNorm #4666


Closed
wants to merge 17 commits

Conversation

dopplershift
Contributor

Renamed version of #3858.

Tests are failing on my machine because np.interp in my numpy (1.9.2) drops the mask from masked arrays:

import numpy as np
vmin, vcenter, vmax = -1.0, 1.5, 4.0
x = [vmin, vcenter, vmax]
y = [0.0, 0.5, 1.0]
a = np.ma.array(np.arange(-2, 5))
a[0] = np.ma.masked
np.interp(a, x, y)

I see:

array([ 0. ,  0. ,  0.2,  0.4,  0.6,  0.8,  1. ])

with no mask.

@jenshnielsen
Member

I have tested the numpy issue and it looks like it has behaved this way since at least numpy 1.6.0.

@efiring
Member

efiring commented Jul 12, 2015

  1. numpy interp: I think this is to be expected. For the most part, plain numpy doesn't know how to handle masked arrays, which is why the ma module is so big. Here, explicit handling seems to be in order--you need to do the interpolation using only the valid values. The interp docstring says nothing about handling nans, either.
  2. I think the idea for "piece-wise-linear" is that it is general, and can be applied with any number of segments. The version here is a special case with two segments. It would be nice to handle the general and special cases from the outset.
  3. What about the inverse? The basic linear Normalize norm is just a special case of piece-wise linear; the latter should be just as invertible as the former.
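The explicit handling suggested in point 1 could be sketched as follows; masked_interp is a hypothetical helper (not code from this PR) that interpolates only the valid values and preserves the mask that plain np.interp discards:

```python
import numpy as np

def masked_interp(a, xp, fp):
    """Interpolate only the valid entries of a masked array,
    preserving the mask that plain np.interp drops."""
    a = np.ma.asarray(a)
    out = np.ma.masked_all(a.shape, dtype=float)
    valid = ~np.ma.getmaskarray(a)
    # Interpolate the compressed (valid-only) values; assigning them
    # back through the boolean index unmasks exactly those positions.
    out[valid] = np.interp(a.compressed(), xp, fp)
    return out

vmin, vcenter, vmax = -1.0, 1.5, 4.0
a = np.ma.array(np.arange(-2, 5))
a[0] = np.ma.masked
r = masked_interp(a, [vmin, vcenter, vmax], [0.0, 0.5, 1.0])
# r keeps the mask on the first element; the rest match np.interp's output
```

This reproduces the example from the opening comment, but with the first element still masked in the result.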

@tacaswell tacaswell added this to the next point release milestone Jul 12, 2015
@tacaswell tacaswell added the Release critical For bugs that make the library unusable (segfaults, incorrect plots, etc) and major regressions. label Jul 12, 2015
@tacaswell
Member

I am a bit nervous that the inverse was removed.

@OceanWolf
Member

Any update on this?

@OceanWolf
Member

I will take a look at this tomorrow and see what I can add. Should I make a PR to merge into this branch, or work on top of this in a separate PR?

The data value that defines ``1.0`` in the normalized data.
Defaults to the max value of the dataset.

clip : bool, optional (default is False)
Member

This parameter actually has no effect. It seems to me that (at least in numpy 1.9.2) np.interp always clips:

In [34]: np.interp(np.arange(10), [4, 5, 6], [0, 0.5, 1])
Out[34]: array([ 0. ,  0. ,  0. ,  0. ,  0. ,  0.5,  1. ,  1. ,  1. ,  1. ])

Need to allow it in __call__ since colorbar can pass a value for clip.
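One way around np.interp's forced clipping is its left and right keywords, which override the values returned outside the xp range; a norm could use NaN sentinels there and mask them afterwards (a sketch of the workaround, not what this PR does):

```python
import numpy as np

x = np.arange(10, dtype=float)
# Default behavior: out-of-range inputs are pinned to the endpoint outputs.
clipped = np.interp(x, [4, 5, 6], [0.0, 0.5, 1.0])
# With sentinels: out-of-range inputs come back as NaN instead...
flagged = np.interp(x, [4, 5, 6], [0.0, 0.5, 1.0], left=np.nan, right=np.nan)
# ...and can then be masked off rather than silently clipped.
masked = np.ma.masked_invalid(flagged)
```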
To make it more similar to the other norms

Also remove misleading comment about returning scalars
Since possibly some earlier versions of numpy returned a scalar,
wrap the value in atleast_1d before indexing.
np.interp handles the case vmin == vcenter, we have to add a special case
to make the last test case pass
@jkseppan
Member

jkseppan commented Sep 3, 2015

I added some suggestions on my branch, also submitted as a pull request to @dopplershift's branch.

@OceanWolf
Member

Just looked at it (dopplershift#2), but we still lack the generality... I think the API should read:

def __init__(self, stops, clip=False):

We need stops as a kind of dictionary; I'm not too sure yet whether {<cmap-fraction>: <value>} or {<value>: <cmap-fraction>}. I guess we would want an ordered dictionary, but as we still support python-2.6 that means we should do it as a list of tuples... [(, ), (, )]. With this structure we have:

class CenteredLinearNorm(PiecewiseLinearNorm):
  def __init__(self, vmin, vcenter, vmax, clip=False):
    stops = [(0., vmin), (0.5, vcenter), (1., vmax)]
    PiecewiseLinearNorm.__init__(self, stops, clip)

PiecewiseLinearNorm will probably want to do some sort of validation to ensure order, in which case we can accept any kind of dict and build our own sorted dict.
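A rough sketch of that validation, with stops as (cmap-fraction, data-value) pairs; class and attribute names are illustrative only, not from the PR:

```python
import numpy as np

class PiecewiseLinearNorm(object):
    """Hypothetical stops-based norm: stops is any iterable of
    (cmap_fraction, data_value) pairs."""
    def __init__(self, stops, clip=False):
        stops = sorted(stops)  # order by cmap fraction
        fracs, values = zip(*stops)
        # Both axes must be strictly increasing for np.interp to be valid.
        if not (np.all(np.diff(fracs) > 0) and np.all(np.diff(values) > 0)):
            raise ValueError("stops must be strictly increasing")
        self._fracs = np.asarray(fracs, dtype=float)
        self._values = np.asarray(values, dtype=float)
        self.clip = clip

    def __call__(self, value):
        return np.interp(value, self._values, self._fracs)

    def inverse(self, value):
        return np.interp(value, self._fracs, self._values)

class CenteredLinearNorm(PiecewiseLinearNorm):
    def __init__(self, vmin, vcenter, vmax, clip=False):
        stops = [(0.0, vmin), (0.5, vcenter), (1.0, vmax)]
        PiecewiseLinearNorm.__init__(self, stops, clip)
```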

@tacaswell
Member

I also started working on this a while ago, but got distracted. I was envisioning an API something like

tacaswell@de5eb53

@OceanWolf
Member

@tacaswell Any reason why you have them as two separate lists? We have a 1:1 mapping, so two lists would require an extra check that they have equal lengths; and if we do have two lists, we can easily put them into ordered-dict format with zip(map_points, data_points).

@tacaswell
Member

Because they have different units, and because there are fewer ways to input this wrong.

If we take a list of tuples then we also need to take 2D arrays, which means we need to deal with the transpose, etc.

Also, this matches the underlying np.interp API, so we can just pass through to that and let numpy do those checks (we just need to check monotonicity, as their docs explicitly say they do not check for that).


@OceanWolf
Member

Well, I don't see different units as a problem... it underpins the whole definition of a dict, e.g. {'time': 5}: string key, int value.

As for "more ways to input it wrong", I think I see why you say that: you think in terms of a list of tuples. But as I say, we should instead think of it as passing in an OrderedDict, or anything that stops = OrderedDict(stops) won't raise on. This approach feels better for the future, when we finally drop support for python-2.6 (which you have already considered doing recently).

from collections import OrderedDict
import numpy as np

def __init__(self, stops, clip=False):
    stops = OrderedDict(stops)  # raises on malformed input
    stops = OrderedDict(sorted(stops.items()))  # sort by cmap fraction
    map_points = list(stops.keys())
    data_points = list(stops.values())  # get data ready for np.interp
    if not np.all(np.diff(data_points) > 0):
        raise ValueError("stops must increase monotonically")

We need far simpler error checking here because we utilize ordered dicts, and it also makes the class a lot more useful: we simply duck-type the input based on its dict-like structure, so anything that can be cast to a dict we accept. While we can't rely on OrderedDict yet because of our support for python-2.6, we can for now cast to an ordinary dict and then extract and work internally with a list of tuples; casting to a dict means we can also accept OrderedDicts right now, just handled a bit more clumsily internally until we drop support for 2.6.

@jkseppan
Member

jkseppan commented Sep 3, 2015

Are you sure there is a use case for a general piecewise linear mapping? Centering a diverging colormap's neutral color on a specific data value makes a lot of sense to me, but I'm not so sure about the more general case.

@OceanWolf
Member

@jkseppan I thought about this earlier today, and I wondered about our Accent colour map (a qualitative one). That might work well with data sources where we want to see the local variation, but where big jumps in the data would otherwise drown out that local variation...

@tacaswell
Member

@dopplershift was very excited about this for meteorology where they have color maps with multiple natural break points. The other canonical use case would be to correctly map gist_earth on to elevation data.

@tacaswell
Member

@dopplershift @jkseppan This needs a rebase. If this turns back into just a centered norm for now, that is cool; it just needs renaming.

@jkseppan
Member

It seems that the problem with #3858 was finding a fitting name for the norm, and people liked PiecewiseLinearNorm, but that suggests more generality than implemented here. How about DivergingNorm, since it's likely to get used with a diverging colormap? Then we can later implement the more general case as PiecewiseLinearNorm if needed.
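A minimal sketch of a DivergingNorm along these lines, invertible via np.interp run in both directions, which also answers the earlier question about the inverse (illustrative only; the version matplotlib eventually merged differs in detail):

```python
import numpy as np

class DivergingNorm(object):
    """Map vmin -> 0.0, vcenter -> 0.5, vmax -> 1.0 piecewise-linearly."""
    def __init__(self, vmin, vcenter, vmax):
        if not vmin < vcenter < vmax:
            raise ValueError("vmin, vcenter, vmax must be strictly increasing")
        self.vmin, self.vcenter, self.vmax = vmin, vcenter, vmax

    def __call__(self, value):
        # Forward mapping: data values -> [0, 1] colormap fractions.
        return np.interp(value, [self.vmin, self.vcenter, self.vmax],
                         [0.0, 0.5, 1.0])

    def inverse(self, value):
        # Inverse mapping: swap the two argument lists.
        return np.interp(value, [0.0, 0.5, 1.0],
                         [self.vmin, self.vcenter, self.vmax])
```

Because each segment is monotonic, swapping the xp and fp arguments of np.interp gives an exact inverse, just as with the basic linear Normalize.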

@jkseppan jkseppan mentioned this pull request Sep 13, 2015
@OceanWolf
Member

I have just started on the generic version of this, we shall see how far I get...

@tacaswell
Member

Sold on diverging norm


@tacaswell tacaswell modified the milestones: next point release, proposed next point release Sep 14, 2015
@NelleV NelleV removed the Release critical For bugs that make the library unusable (segfaults, incorrect plots, etc) and major regressions. label Nov 7, 2016
@tacaswell tacaswell modified the milestone: 2.1 (next point release) Aug 29, 2017
@jklymak
Member

jklymak commented Oct 6, 2018

closing in favor of #5054
