
ENH: Added FuncNorm and PiecewiseNorm classes in colors #7294


Closed
wants to merge 33 commits into from

Conversation

Contributor

@alvarosg alvarosg commented Oct 17, 2016

Summary [Last edit 16/12/2016]:

This has now been split into different PRs:
- Implementing _StringFuncParser (merged)
- Implementing FuncNorm

Summary [Last edit 25/10/2016]:

  • There is a new class called FuncNorm that can take any function and use it for the normalization. LogNorm or PowerNorm could be reimplemented using it.
  • There is a new class PiecewiseNorm, inheriting from FuncNorm, that allows specifying a list of non-divergent functions (lambda functions or special strings) to map different ranges of the data's dynamic range into different ranges of the colorbar. Internally it creates an np.piecewise function, hiding all the complexity of building a piecewise map that is non-linear but continuous on each interval and lands in [0-1].
    • The user has to provide a list of functions, and a list of reference points for the data and the colorbar. For example:
      • Take all data up to -1, and map it linearly to the colorbar range 0-0.3
      • Take all data between -1 and 1, and map it quadratically to the colorbar range 0.3-0.5
      • Take all data after 1, and map it cubically to the colorbar range 0.5-1
      • Specified as: colors.PiecewiseNorm(flist=['linear', 'quadratic', 'cubic'], refpoints_cm=[0.3, 0.5], refpoints_data=[-1., 1.])
    • There are some predefined functions, so the user can specify 'linear', 'quadratic', 'sqrt', ..., and these will be automatically parsed into actual functions. The advantage is that in that case the user does not have to provide the inverse functions, because we know them.
    • If the user provides lambda functions instead of a predefined string, then the inverse function is necessary.
    • The only condition for the provided function is that it must be monotonic and not diverge in the [0-1] interval. The class will automatically normalise the provided function so f(0)=0 and f(1)=1.
    • Also, the ticks method is improved to guarantee that there is always one tick in the colorbar at all the interval changes, and that the rest of the ticks available are spread evenly through the colorbar according to the size of each interval, landing at "relatively round" numbers.
    • `SymLogNorm` could be reimplemented using this.
  • Thanks to the power of this class, several subclasses can derive from it:
    • MirrorPiecewiseNorm: similar to PiecewiseNorm with two intervals, with the difference that the two intervals are treated symmetrically rather than translationally. In a scenario where the user would like to amplify features around 0 on both the positive and negative sides symmetrically, but more strongly on the negative side, they would otherwise have to:
      • Set a positive range with f(x)=sqrt(x), and a negative range with f(x)=-cubicroot(-x+1)+1, which is a bit awkward.
      • MirrorPiecewiseNorm automatically takes care of transforming the function in the second range internally, so the user only has to provide f(x)=sqrt(x) for the positive range and f(x)=cubicroot(x) for the negative range.
    • MirrorRootNorm, inheriting from MirrorPiecewiseNorm, lets the user achieve the same by just specifying colors.MirrorRootNorm(orderpos=2, orderneg=3).
    • RootNorm, inheriting from FuncNorm, performs root normalization by simply specifying the order: colors.RootNorm(order=3).
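Since this PR was ultimately not merged, the helper below is not actual matplotlib API; it is a minimal numpy sketch of the FuncNorm idea summarized above (the name `func_norm` and its signature are made up for illustration): rescale the data linearly to [0, 1], then pass it through a monotonic function that is itself renormalized so that f(0)=0 and f(1)=1.

```python
import numpy as np

def func_norm(data, f, vmin=None, vmax=None):
    """Normalize `data` to [0, 1] through the monotonic function `f`.

    Hypothetical helper (not matplotlib API): the data is first rescaled
    linearly to [0, 1], then passed through `f`, which is itself rescaled
    so that f(0) -> 0 and f(1) -> 1, as described in the summary above.
    """
    data = np.asarray(data, dtype=float)
    vmin = data.min() if vmin is None else vmin
    vmax = data.max() if vmax is None else vmax
    t = (data - vmin) / (vmax - vmin)          # linear rescale to [0, 1]
    f0, f1 = f(0.0), f(1.0)
    return (f(t) - f0) / (f1 - f0)             # force f(0)=0, f(1)=1

# A square-root norm amplifies detail near the low end of the range:
vals = func_norm([0.0, 0.25, 1.0], f=np.sqrt)
```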

TO DO list:

  • Make everything inherit from FuncNorm
  • Implement it to use np.piecewise internally
  • Implement examples
  • Implement tests
  • Include fully documented docstrings with full description and examples

Some examples of usage and outputs:

This is an example of an array with a large dynamic range using a regular linear scale. It consists of several regions:

  • (1<y<2) Linear transition between -1 and 0 in the left half, and between 0 and 2 in the right half.
  • (0<y<1) Non-linear wavy function: cos(x) * y**2
  • (-0.5<y<0) Series of bumps sitting at zero, growing linearly between 0 and 2.
  • (-1<y<-0.5) Series of negative bumps (depressions) sitting at zero, growing deeper between 0 and -1.
  • (-1.5<y<-1) Series of small bumps of 0.05 amplitude sitting at -1, -0.8, -0.6, -0.4, -0.2 and 0, and of 0.1 amplitude sitting at 0.4, 0.8, 1.2, 1.6, 2.
  • (-2<y<-1.5) Series of small negative bumps of -0.05 amplitude sitting at -1, -0.8, -0.6, -0.4, -0.2 and 0, and of -0.1 amplitude sitting at 0.4, 0.8, 1.2, 1.6, 2.
    figure_1
    figure_0
    Example of log normalization using FuncNorm. I have manually set the minimum value to 0.01 to fix the dynamic range that will be plotted. Negative features are not shown.
norm = colors.FuncNorm(f=lambda x: np.log10(x),
                       finv=lambda x: 10.**(x), vmin=0.01)

figure_2
or alternatively with:

norm = colors.FuncNorm(f='log', vmin=0.01)

Example of root normalization using FuncNorm. Of course with direct sqrt normalization, negative features cannot be shown.

norm = colors.FuncNorm(f='sqrt', vmin=0.0)

figure_3

Performing a symmetric amplification of the features around 0

norm = colors.MirrorPiecewiseNorm(fpos='crt')

figure_4

Amplifying positive and negative features standing on 0.6

norm = colors.MirrorPiecewiseNorm(fpos='crt', fneg='crt',
                                  center_cm=0.35,
                                  center_data=0.6)

figure_5

Amplifying positive and negative features standing on both -0.4 and 1.2. The bumps in the bottom area (-2<y<-1) sitting at the values -0.4 (light blue) and 1.2 (orange) are amplified, both the positive and the negative bumps. We can observe that the ticks are not exactly equispaced: this guarantees a tick at each interval change while keeping the rest at round numbers.

norm = colors.PiecewiseNorm(flist=['cubic', 'crt', 'cubic', 'crt'],
                            refpoints_cm=[0.25, 0.5, 0.75],
                            refpoints_data=[-0.4, 1, 1.2])

figure_6

Amplifying only positive features standing on all -1, -0.2 and 1.2. The bumps in the bottom area (-2<y<-1) sitting at the values -1 (black), -0.6 (turquoise) and 1.2 (yellow) are amplified, but in this case only the positive bumps.

norm = colors.PiecewiseNorm(flist=['crt', 'crt', 'crt'],
                            refpoints_cm=[0.4, 0.7],
                            refpoints_data=[-0.2, 1.2])

figure_7

Member

@NelleV NelleV left a comment

Thanks for the patch! I unfortunately don't have time to do a detailed review of the code, so I pointed out only the elements that are not matching our contribution requirements (which we are documenting right now…).

Mostly, make sure that
(1) your code is pep8 (as far as I could tell, your code is, but the examples aren't)
(2) docstrings are in numpydoc format
(3) examples should be moved to the examples folder, and if possible use the object-oriented mpl interface.
(4) You'll have to add a test at some point as well, but we can worry about this later.

Do you mind adding the figures your patch generates to give an idea of how it looks?

Thanks!
Nelle

@@ -0,0 +1,62 @@
"""
Member

The examples need to be moved to the examples folder in the root directory.
Can you also make sure to follow our (very new) docstring standard for examples, with a title and a description: http://matplotlib.org/devdocs/devel/MEP/MEP12.html (see the sphinx docstring example).

Contributor Author

Thanks for the feedback!
Do you have any suggestions for where in particular in the examples folder? I knew the examples folder was preferred, but I did it in the doc folder because that is where similar examples (other normalization examples) were located.

Sure, I will change all the formatting of the docstrings. Even though I had read about that format, I tried to follow the same formatting that was being used in the nearby classes, but I will change that.

Member

We should probably move all of those in the examples folder - else they won't be dealt with properly in the gallery.
The title + description element is a fairly recent thing - we'd like to use sphinx-gallery, which requires such formatting.

Contributor Author

I have now added and committed the docstring to the examples; however, I have not yet moved them to the examples folder, as I am not sure which subfolder is the most appropriate, or whether I should create a new subfolder.

import matplotlib.pyplot as plt
import matplotlib.cm as cm

x=np.linspace(0,16*np.pi,1024)
Member

The code is not PEP8 compatible. Try running the tool flake8 to identify those pep8 incompatibilities. Most text editors provide plugins for flake8 (SublimeText, vim, emacs…).

Contributor Author

Ok thanks! I was not sure if the PEP8 compatibility also applied to the examples. I will correct this.

figsize=(16,10)
cmap=cm.gist_rainbow

plt.figure(figsize=figsize)
Member

Can you try using the object oriented interface instead of the pyplot one? It's more powerful and flexible.

Contributor Author

Do you mean fig = plt.figure(figsize=figsize), and then ax = fig.add_subplot(...), ax.plot(...), etc.?

Member

Exactly!

Contributor

The default figsize should be good. The default approach should be

fig, ax = plt.subplots()
ax.pcolormesh(...)

center=.5,
vmin=None, vmax=None, clip=False):
"""
*fpos*:
Member

Please use numpydoc format for docstring.

@alvarosg
Contributor Author

alvarosg commented Oct 17, 2016

[Edit: 23/10/2016] Examples used to be here. Now they are at the top.

@tacaswell tacaswell added this to the 2.1 (next point release) milestone Oct 17, 2016
@tacaswell
Member

Can you re-implement the existing PowerNorm classes as special-cased sub-classes of these?


if orderneg is None:
orderneg = orderpos
ArbitraryNorm.__init__(self,
Member

Can you please use super()?

@tacaswell
Member

Nifty!

@alvarosg
Contributor Author

@tacaswell
In principle most of the existing normalizations could be built on top of this. An exception to this is the SymLogNorm, since as the documentation points out, due to the divergence of the logarithm around 0, it requires more complex behavior, which the current implementation solves by adding an extra linear region to safely cross through zero, and this would not be compatible with the ArbitraryNorm classes.

In any case, unless there are existing unit tests for the current implementation of PowerNorm against which the new implementation could be checked for backward compatibility, I am not sure I would feel confident guaranteeing that the new implementations behave in exactly the same way, exceptions and all, at least not yet.

@anntzer
Contributor

anntzer commented Oct 18, 2016

In general I like the idea a lot. However...

  1. Do we really need the inverse of the normalizer? After a quick grep (for r"\b\.inverse\b") through the codebase, it appears that this inverse is used exactly at one place: by ColorbarBase._process_values, to find values (self._values) that will, upon normalization, be uniformly distributed over [0, 1], so that the colorbar can later be drawn as pcolormesh(..., self._values, cmap=self.cmap, norm=self.norm). But we can trivially avoid that requirement by instead storing the normalized values (if a list of values is explicitly passed into ColorbarBase, then ColorbarBase can start by immediately applying the normalizer) and then later drawing the colorbar as pcolormesh(..., self._normalized_values, cmap=self.cmap, norm=<linear normalizer>).

Not having to pass in the inverse (which would require a bit of reengineering of ColorbarBase, for sure) would seem to be a big improvement to the API.

  2. I would also stick to implementing a single class (ArbitraryNorm) without support for separate fpos and fneg, and simply document how variants can be implemented. The signature could be as simple as
ArbitraryNorm(normalizer, [inv_normalizer])

(where the need for the inverse depends on how 1. is resolved). np.piecewise directly allows forcing the value at zero (and this should be documented, of course). Your example could be written as (something like)

ArbitraryNorm(
    lambda x: np.piecewise(x, [x < .4],
                           [lambda x: -(.4 - x) ** .5,
                            lambda x: (x - .4) ** .2])
)

(assuming you normalize yourself the range of the normalizer to [0, 1] after calling it).

Basically, this is trading off API complexity in ArbitraryNorm vs. whether you already know np.piecewise or not. If you didn't know np.piecewise then in both cases you have to learn an extra API, but if you already did, then it's a clear gain.
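For readers who want to try the np.piecewise route described above, here is a runnable variant of the sketch (the name `normalizer` is hypothetical, not matplotlib API). Note that np.piecewise wants callables (or scalars) in its function list; each callable is evaluated only on the subset of x where its condition holds, so neither branch ever sees invalid input:

```python
import numpy as np

def normalizer(x):
    """Two-piece map with a join at x = 0.4, as in the sketch above."""
    x = np.asarray(x, dtype=float)
    # With one condition and two callables, the second callable acts as
    # the "otherwise" branch (np.piecewise allows len(funclist) ==
    # len(condlist) + 1).
    return np.piecewise(
        x,
        [x < 0.4],
        [lambda x: -(0.4 - x) ** 0.5,   # root-like branch below 0.4
         lambda x: (x - 0.4) ** 0.2])   # root-like branch above 0.4

y = normalizer([0.0, 0.4, 1.0])
```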

  3. Please make whatever attributes you give to your objects (self.fneg, etc.) private (with a leading underscore). That'll avoid tons of issues later when someone will want to change that class and run into back-compatibility issues :-) (Especially for classes like this, where there shouldn't be many reasons (?) to want to access these attributes.)

@alvarosg
Contributor Author

@anntzer Thank you for the feedback :) Some replies below!

  1. Yes, I already knew that the inverse function was only used for setting uniform ticks in the colorbar space. However, I am not so sure that what you are proposing will achieve exactly the same thing we have now.

Essentially, what a normalisation does is describe a bijection A->B, where A is the real numbers and B is the real numbers in the [0, 1] interval. Once the data is converted to [0, 1], you want a linear representation of B in both the plot and the colorbar. The only difference really comes in the ticks, because the values assigned to the ticks still need to be values from A. But the colorbar still needs to be linear in B, and the only way to achieve uniform spacing in B, with tick labels that correspond to values in A, is to be able to invert the function. From a mathematical point of view it seems analytically impossible otherwise.

I would have to see the exact implementation of the code, but my feeling is that you would get a colorbar that is linear in A, which would compress the colours in the regions where numbers in A are closer together. Essentially the class would amplify the features in the figure by giving more colors to certain regions, but then the colorbar would not show a fair distribution of the colors. In fact, I think the colorbar would essentially look like the bottom part of this plot, with the colors not spread equally.

Another option would be to numerically calculate the inverse for the required points. Since this is only called for a couple of points, and the direct function should be bijective, it should be very straightforward for any gradient-descent minimizer to solve the equation f(x0) = y0 for x0; however, even though small, this may have an impact on performance.
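The numerical-inverse option mentioned above can be sketched with plain bisection, which suffices because the norm functions are required to be monotonic (`numeric_inverse` is a hypothetical helper, not part of the PR):

```python
import numpy as np

def numeric_inverse(f, y, lo=0.0, hi=1.0, tol=1e-12):
    """Solve f(x) = y for x on [lo, hi] by bisection.

    Assumes f is monotonically increasing on the interval, which is the
    condition the norm classes impose anyway. Only a handful of tick
    values ever need inverting, so ~40 iterations is cheap.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Recover x from y = sqrt(x) without knowing the analytic inverse:
x0 = numeric_inverse(np.sqrt, 0.5)   # close to 0.25
```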

  2. I like the idea of merging the three classes together, but I am not sure that using np.piecewise to give a single lambda does not make things more confusing. But maybe we can reach a middle ground.

Essentially there are 3 use cases:
A - The user wants a single arbitrary transformation. In this case, only a function (and inverse) has to be provided. This is what PositiveArbitraryNorm does, and in fact this is also what NegativeArbitraryNorm does. The only difference between the two is how the function is interpreted. For example, given a square root function, PositiveArbitraryNorm will amplify the features towards the low end of the interval, and NegativeArbitraryNorm will amplify the features towards the high end of the interval. So, in the end, if I refactor the code, I think everything could come down to a change of sign.
Arguments:
Function (and inverse)

B - The user wants two arbitrary transformations. For example, if the user wants to amplify values around 0, both on the positive and the negative side, then two transformations may be applied, one for values above 0 and one for values below 0. Of course, there are analytical functions that would achieve this with a single function, but independent control over the positive and negative sides would be much more difficult. In this case two functions and a reference point may be supplied.
Arguments:
FunctionPos (and inverse)
FunctionNeg (and inverse)
ReferencePoint: This does not exist yet; it would be the value in the original space around which we change the transformation. The current implementation assumes this is always 0.
ReferencePointCB: Which fraction of the colorbar will correspond to ReferencePoint. This would be the equivalent of center, which is currently the position of 0 in the colorbar.

C - The user wants the double transformation again, but wants it applied only in a symmetric way, so really only one function needs to be specified.
Arguments:
FunctionPos (and inverse) ->FunctionNeg (and inverse) (Implicitly)
ReferencePoint:
ReferencePointCB:

Maybe the prototypes could be:

A - ArbitraryNorm(normalizer[1], [inv_normalizer[1]])

B - ArbitraryNorm(normalizer[2], [inv_normalizer[2]], center=[ReferencePoint, ReferencePointCB])
ArbitraryNorm(normalizer[2], [inv_normalizer[2]], center=ReferencePointCB) Assume ReferencePoint=0
ArbitraryNorm(normalizer[2], [inv_normalizer[2]]) Assume ReferencePoint=0, ReferencePointCB=0.5

C - ArbitraryNorm(normalizer[1], [inv_normalizer[1]], center=[ReferencePoint, ReferencePointCB])
ArbitraryNorm(normalizer[1], [inv_normalizer[1]], center=ReferencePointCB) Assume ReferencePoint=0
without the third option in this case, because otherwise it would be the same as A. Alternatively, we could add an extra parameter to distinguish A from B and C.
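Use case C relies on the mirroring described in the PR summary: the user provides cbrt for the negative side instead of the awkward -cubicroot(-x+1)+1, because that rewriting is just x -> 1 - f(1 - x). A tiny sketch with hypothetical names (not the PR's actual code):

```python
def mirror(f):
    """Mirror `f` (a [0, 1] -> [0, 1] map with f(0)=0 and f(1)=1).

    Features that `f` amplifies near 0 are amplified near 1 by the
    mirrored function instead; this is exactly the 1 - f(1 - x)
    rewriting of -cubicroot(-x + 1) + 1 from the summary above.
    """
    return lambda x: 1.0 - f(1.0 - x)

def cbrt(x):
    return x ** (1.0 / 3.0)

f_neg = mirror(cbrt)  # equivalent to the awkward -cbrt(-x + 1) + 1
```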

  3. Yes, I will do this; I have no idea why I did not do it in the first place. Thanks :)

@anntzer
Contributor

anntzer commented Oct 18, 2016

Actually, tick positions are not computed using the inverse transform (see e.g. the plots at http://matplotlib.org/devdocs/users/colormapnorms.html#symmetric-logarithmic and http://matplotlib.org/devdocs/users/colormapnorms.html#power-law). Only a few cases are special-cased; e.g. log normalization uses the LogLocator class (see matplotlib.colorbar.ColorbarBase._ticker).

I'm not sure it makes sense to try to guess tick locations anyway (for example, in the case of a "split" linear normalization such as the one that you propose, it probably makes sense to put ticks equally spaced above the "zero" and below it, rather than uniformly everywhere; certainly you'd want to have the zero labeled).

I think having some actual use cases for all the classes that you propose, and comparing how they look like using np.piecewise, is the way to decide on 2.

@alvarosg
Contributor Author

alvarosg commented Oct 18, 2016

@anntzer
I agree, the inverse function is not used to get the tick positions, but rather the labels that will go at the tick positions. But of course this is only necessary in the special case where we want a uniform distribution of ticks.

I think in such non-linear cases it will be very difficult to guess what the user wants. I agree that we should have a tick at 0, but in my case I would still like to have uniform ticks across the whole colorbar, and the reason is that the level of detail that one will be able to read off should be related to the distribution of colors; that is what the normalization is for in the first place. My reasoning is that if you only plan to devote 25% of your colorbar to negative numbers, it is because you do not care that much about details in the negative range, and hence you do not need to be able to read the values as clearly.

Honestly, I think most users of this will end up manually adding the ticks to meet the expectations of the level of detail they want to emphasize in different regions of the colorbar.

@alvarosg
Contributor Author

alvarosg commented Oct 19, 2016

@NelleV
@anntzer
@tacaswell
@mdboom
@lebigot

I just pushed some significant upgrades to this. I would appreciate getting some feedback from you! From the contents point of view I am quite happy. My plan is to clear the TODO list, and then after that I am not sure how to proceed. Please feel free to add stuff to the TODO list.

Cheers :)

@anntzer
Contributor

anntzer commented Oct 19, 2016

One thing that needs to be clarified is how vmin/vmax interact with the function you pass in. I think you basically always do f((x-vmin)/(vmax-vmin)). A quick check suggests that this is what the current PowerNorm does too, but this is actually rather counterintuitive to me: I'd expect (f(x)-f(vmin))/(f(vmax)-f(vmin)), which is what the log norm does, for example. (From a physical point of view: x may not be defined on a linear scale, so linearly rescaling it immediately doesn't make much sense; instead, f is used to linearize the scale.)

@alvarosg
Contributor Author

@anntzer
I guess they are two different approaches. In my case, the vmin/vmax problem actually applies to every one of the intervals. So for each interval, the range data_min to data_max needs to be monotonically mapped onto cm_min to cm_max.

Right now this is calculated similarly to the first option:
cm_value = f((d_value - d_min) / (d_max - d_min)) * (cm_max - cm_min) + cm_min
where f always satisfies f(0)=0 and f(1)=1.

I would have to think about the implications of doing it the other way, and whether it is possible to match the intervals as easily.
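The two conventions under discussion give genuinely different results whenever vmin is not 0, even for a power law; a quick numpy comparison (the helper names are made up for illustration, not matplotlib API):

```python
import numpy as np

# Convention A (what this PR and PowerNorm do): rescale x linearly
# to [0, 1] first, then apply f.
def norm_rescale_first(x, f, vmin, vmax):
    return f((x - vmin) / (vmax - vmin))

# Convention B (what LogNorm does): apply f first, then rescale
# f's output linearly to [0, 1].
def norm_apply_first(x, f, vmin, vmax):
    return (f(x) - f(vmin)) / (f(vmax) - f(vmin))

# With vmin != 0 the two conventions disagree, even for a power law:
x = np.array([1.0, 25.0, 100.0])
a = norm_rescale_first(x, np.sqrt, vmin=1.0, vmax=100.0)
b = norm_apply_first(x, np.sqrt, vmin=1.0, vmax=100.0)
```

Both conventions pin the endpoints to 0 and 1, but the midpoint value differs, which is exactly the ambiguity raised above.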

@story645
Member

As a tangent, can this please not be named ArbitraryNorm? I think that name is just broad enough to be unhelpful, 'specially since there are other types of Norms (like a categorically aware one) that wouldn't fit in. How about NonLinearNorm? or FunctionalNorm or something else just please more precise.

@anntzer
Contributor

anntzer commented Oct 19, 2016

Perhaps ContinuousNorm (because I think continuity of the transformation is the only real requirement) or FuncNorm (consistently with FuncFormatter)?

I am personally still opposed to what I see essentially as a rather complex rewrite of np.piecewise, though.

@story645
Member

story645 commented Oct 19, 2016

Hmm, PiecewiseNorm? Is there a current way to get a norm out of np.piecewise?

@anntzer
Contributor

anntzer commented Oct 19, 2016

My point is that this class should just take a single callable rather than a bunch of them; if the user wants a piecewise-defined normalization he should just construct such a function using np.piecewise (and there should be an example of doing this in the docs).

@story645
Member

story645 commented Oct 19, 2016

I think it should be slightly more generic than just taking a piecewise function. Could it take any function (well, kind of...)? Like:

def f(x):
    if x >= 0.4:
        return x**0.2
    return x**0.5

FuncNorm(f)

`cause I think it gives the user the flexibility they need/want out of this while also keeping the args/kwargs to a minimum. I think it ends up being more maintainable by inherently not making any assumptions about what the user wants.

(As a side note, this could then possibly be used to build a CategoricalNorm depending on long it takes me to wrap up #6934)
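One caveat with the scalar sketch above: a plain Python `if` on a numpy array raises ValueError (ambiguous truth value), so an array-friendly version needs np.where or np.piecewise. A possible equivalent (`f_vec` is a made-up name for illustration):

```python
import numpy as np

def f_vec(x):
    """Array-friendly version of the scalar branching sketch above."""
    x = np.asarray(x, dtype=float)
    # np.where evaluates both branches elementwise and selects, so it
    # works where a plain `if x >= 0.4:` would raise on an array.
    return np.where(x >= 0.4, x ** 0.2, x ** 0.5)

y = f_vec(np.array([0.0, 0.25, 0.5]))
```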

@anntzer
Contributor

anntzer commented Oct 19, 2016

You probably cannot (completely) build a categorical norm out of it, because the color drawing assumes that the output of the transform continuously covers [0, 1] (after rescaling).

@story645
Member

shrugs I wrote one, just haven't factored it out of that PR into a standalone...but this gives another approach of sorts. Either way, I think I'm 👍 on something like FuncNorm

@alvarosg
Contributor Author

@anntzer
@story645

Well, I agree that np.piecewise is closely related to this, and that it should be used internally (I will include it when I have some time). However, I still think the argument to the class should not be given already as a piecewise lambda function, for two reasons:

  • From having a series of simple functions showing just a trend, such as square root, quadratic, or linear, to having those functions implemented with the right offsets and normalizations, so they all match at the ends of the different ranges and fit into the [0, 1] interval, there is a huge difference in the code complexity of the input needed from the user. What I do is just ask the user for the individual pieces and assemble them into a continuous piecewise function, by forcing them to match at the ends.
  • If I want to be able to choose good places for the ticks automatically, I need to know where the jumps in the piecewise functions are, and I would not have this information if I already get a single piecewise lambda function as input.

Apart from this, I think PiecewiseNorm would be a great name for that class instead of ArbitraryNorm.

About taking any function: of course, this could be very easily implemented, because I would always apply a final linear transformation to the function to make sure that f(vmin)=0 and f(vmax)=1. This would be called FuncNorm.

Ideally, I would then make PiecewiseNorm inherit from FuncNorm. I would build the piecewise function in the constructor of PiecewiseNorm and feed it to the constructor of FuncNorm. There would be some caveats regarding vmin and vmax being able to change post-plotting with clim, but I think I have already thought of a way this can be done.

If you agree, I am happy to implement it this way.
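The assembly step described above (rescaling each piece so that piece i maps its data interval onto its colorbar interval, with matching values at the joins) can be sketched roughly as follows. This is an invented illustration, not the PR's actual implementation, and the names (`build_piecewise`, etc.) are hypothetical:

```python
import numpy as np

def build_piecewise(flist, refpoints_data, refpoints_cm, vmin, vmax):
    """Stitch simple [0, 1] -> [0, 1] functions into one continuous norm.

    Hypothetical sketch of the idea described above: piece i maps the
    data interval [d[i], d[i+1]] onto the colorbar interval
    [c[i], c[i+1]], with each f rescaled so that f(0)=0 and f(1)=1,
    which forces the pieces to match at the interval boundaries.
    """
    d = np.concatenate([[vmin], refpoints_data, [vmax]])
    c = np.concatenate([[0.0], refpoints_cm, [1.0]])

    def norm(x):
        x = np.asarray(x, dtype=float)
        conds = [(x >= d[i]) & (x <= d[i + 1]) for i in range(len(flist))]
        funcs = []
        for i, f in enumerate(flist):
            def piece(x, i=i, f=f):
                t = (x - d[i]) / (d[i + 1] - d[i])       # rescale data to [0, 1]
                g = (f(t) - f(0.0)) / (f(1.0) - f(0.0))  # force f(0)=0, f(1)=1
                return c[i] + g * (c[i + 1] - c[i])      # place into colorbar slot
            funcs.append(piece)
        return np.piecewise(x, conds, funcs)

    return norm

# Linear below -1, quadratic from -1 to 1, sqrt above 1 (cf. the summary):
norm = build_piecewise(
    [lambda t: t, lambda t: t ** 2, np.sqrt],
    refpoints_data=[-1.0, 1.0], refpoints_cm=[0.3, 0.5],
    vmin=-2.0, vmax=2.0)
```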

@anntzer
Contributor

anntzer commented Oct 20, 2016

As it is right now, and as discussed above, the inverse is not used to find tick positions automatically. In fact, I don't think there's really a smart way to do this for arbitrary transforms. For example, I'd say the choice of ticks in your example (https://cloud.githubusercontent.com/assets/12649253/19448591/5e62fb7c-949a-11e6-9ba9-f45090dbff90.png) is rather poor. If anything, I'd manually put ticks at [-1, -.5, -.25, 0, .005, .015, .1, 1, 2].

Another question that I think should be addressed is how often do people need complex normalizations that happen to be representable as piecewise functions? In other words, are we not looking at the wrong target?

In any case I think I've said enough on this PR for now and should let others express their opinion for now :-)

@alvarosg
Contributor Author

alvarosg commented Oct 20, 2016

@anntzer
Thank you for your feedback, some comments/thoughts below:

Yes, as you said, it is very difficult to find good tick positions for arbitrary transforms, but I do not think it is impossible. The image in your link is from the very first version. It currently looks like this. It shows the important ticks (interval changes at -0.4, 1, 1.2), and then it uniformly distributes the rest, which may still give ugly numbers. Nevertheless, this could be improved by automatically rounding the numbers to a number of digits depending on the neighboring numbers, so this should be possible to implement (EDIT: I just gave this a try and updated the figure; I think it works acceptably well). It will always be up to the user, but a good automatic guess is always appreciated.

About the piecewise normalization: normally it is useful to enhance particular features that sit at a particular value of the dynamic range, in particular around zero, and this is why SymLog and SymPower, which are essentially piecewise functions with 3 and 2 intervals, already existed. Also, the updated example shows how this can very easily be used to enhance features sitting at more than one value of the dynamic range.

@alvarosg alvarosg changed the title Added ArbitaryNorm and RootNorm classes in colors Added FuncNorm and PiecewiseNorm classes in colors Oct 22, 2016
@phobson phobson added Documentation MEP: MEP12 gallery and examples improvements labels Oct 22, 2016
@alvarosg
Contributor Author

alvarosg commented Oct 23, 2016

EDIT: All is good now! The rebase suggested by @QuLogic solved it!

@NelleV
You mentioned in your first comment that I would eventually have to add a test. Well, today I added many tests, covering pretty much all the new code (except the exceptions); however, the coveralls check keeps complaining. I am not 100% sure I understand the report here: it seems to say that I have changed 14 files, and then complains about very low coverage, but actually I have only changed 2 files (colors.py plus tests) and created another 2 (examples), and as far as I know only colors.py should be in the coveralls report.

Edit: So I think I understand what is happening. For example, if I was using "axes" in a previous revision and then I stop using it, coveralls starts tracking _axes.py and tells me that I have decreased the coverage. The same happens with figure.py (I started using plt.subplots()). Is this normal?

@efiring
Member

efiring commented Dec 15, 2016

Next, stage 2: FuncNorm plus tests.

@alvarosg
Contributor Author

@efiring Great, I will start working on it! I assume we should also include an example, right?

@efiring
Member

efiring commented Dec 15, 2016

Yes, an example would help.

@alvarosg
Contributor Author

I have now created the next PR related to this, implementing FuncNorm.

@alvarosg alvarosg mentioned this pull request Dec 16, 2016
@tacaswell tacaswell modified the milestones: 2.1 (next point release), 2.2 (next next feature release) Aug 29, 2017
@lrq3000

lrq3000 commented Feb 2, 2018

This PR is just amazing, it needs to be merged! I love the way it allows precisely tuning the colorbar (normalized at 0 or other points, centered or not, intensity amplification on some ranges, etc.). Great work!

PS: Also, I find your examples above (#7294 (comment)) very useful and meaningful; it would be great if they were added to the documentation alongside the PR.

Contributor

@anntzer anntzer left a comment

I strongly

  • oppose adding a new domain-specific language to describe functions (root{n}, etc.). Yes, the parser has already been merged into cbook, but it is not actually used right now, and I would prefer removing it.
  • believe that we should still first sort out the overlap between scales, norms, locators, and formatters that has been (much) discussed above, before adding more complex functionality.

As usual, other devs should feel free to override this review if there is consensus to do so.

@QuLogic
Member

QuLogic commented Feb 2, 2018

@anntzer You should leave that comment on the split PR, #7631, as well.

@anntzer anntzer modified the milestones: needs sorting, unassigned Feb 17, 2018
@anntzer
Contributor

anntzer commented Feb 17, 2018

Closing based on the lack of comments in the two weeks since the rejection.

@anntzer anntzer closed this Feb 17, 2018
@alvarosg
Contributor Author

alvarosg commented Feb 17, 2018

Ironically, the result of a very positive comment about the PR (thanks @lrq3000, I am glad you liked it :) ) is that it gets closed.

@anntzer I completely respect the decision, but I just wish it had been made right at the beginning. I spent many hours of my free time implementing changes to fit the opinions and criteria of different contributors and members here, only to have the PR closed in the end for reasons related to the existing design of the library, which I have no control over. I agree with the issues you raise, but on the basis of those same issues the PR could have been closed right after I posted it...

Anyway, maybe next time it will work out better. Please feel free to ping me if the scales/norms/locators are redesigned, and I will be happy to reimplement the functionality :)

@anntzer
Contributor

anntzer commented Feb 17, 2018

My very first comment (#7294 (comment), made the day the PR was opened) was optimistic about the functionality, but it was also already raising issues about the PR's design that were never addressed, even though, as you mention, that is at least in part due to fundamental design issues in Matplotlib. So in my opinion the fundamental question here is whether we should accept merging more and more layers on top of an initially poor design, or choose to fix the fundamental problems first. I favor the second option; that does not mean this PR cannot be merged, but for that you need to convince two committers that this is a good approach (this is the standard policy for all PRs, even those from other core devs), and so far none has approved the PR.

In my view there is no value in keeping the PR open forever. But closing it does not mean that we cannot revisit it later; it simply removes it from the "general todo list".

@anntzer
Contributor

anntzer commented Feb 17, 2018

Also, AFAICT this is the kind of feature that could easily be published as an independent package providing the custom Norm classes. Doing so would let the module get some real-world usage first.

@lrq3000

lrq3000 commented Feb 18, 2018

@anntzer Honestly, I regret having made my comment. I cannot comment on the implementation, since I am not a matplotlib developer and I don't know the codebase, but this PR obviously provides a much-needed feature: it addresses a very common need, one that can objectively be considered core.

Colorbars are one of the basic and essential features of most scientific graphics, and providing ways to normalize them is essential for giving readers an intelligible quantification (it is necessary almost every time you have to plot both positive and negative values, which for me qualifies it as a core need and feature...).

I hope this feature finds its way back into matplotlib at some point; otherwise it would IMHO be a great loss...

@phobson
Member

phobson commented Feb 18, 2018

I think FuncNorm and PiecewiseNorm would be great too, but the string-parsing/language-defining DSL is a non-starter from my perspective. If it were limited to callables, I'd approve the PR.
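A callables-only version of the piecewise idea needs no string parsing at all. The sketch below is a hypothetical illustration modeled on the semantics described in the PR summary (per-segment functions mapped between data refpoints and colorbar refpoints), not the PR's actual code; the function name and signature are invented for this example:

```python
import numpy as np

def piecewise_norm(values, funcs, refpoints_data, refpoints_cm, vmin, vmax):
    """Map data to [0, 1] segment by segment, using only callables.

    funcs:          one monotonic callable per segment
    refpoints_data: interior breakpoints in data space
    refpoints_cm:   interior breakpoints in colorbar space
    (Hypothetical semantics modeled on the PR description.)
    """
    values = np.asarray(values, dtype=float)
    data_edges = [vmin] + list(refpoints_data) + [vmax]
    cm_edges = [0.0] + list(refpoints_cm) + [1.0]
    out = np.empty_like(values)
    for i, f in enumerate(funcs):
        d0, d1 = data_edges[i], data_edges[i + 1]
        c0, c1 = cm_edges[i], cm_edges[i + 1]
        mask = (values >= d0) & (values <= d1)
        # Normalize the segment to [0, 1], apply f, rescale into [c0, c1];
        # matching the segment edges keeps the overall mapping continuous.
        t = (values[mask] - d0) / (d1 - d0)
        g = (f(t) - f(0.0)) / (f(1.0) - f(0.0))
        out[mask] = c0 + g * (c1 - c0)
    return out

# Example from the PR summary: linear below -1, quadratic in [-1, 1],
# cubic above 1, with colorbar refpoints 0.3 and 0.5.
funcs = [lambda x: x, lambda x: x**2, lambda x: x**3]
res = piecewise_norm([-2.0, -1.0, 0.0, 1.0, 2.0], funcs,
                     refpoints_data=[-1.0, 1.0], refpoints_cm=[0.3, 0.5],
                     vmin=-2.0, vmax=2.0)
# res is [0.0, 0.3, 0.35, 0.5, 1.0]: the data refpoints -1 and 1 land
# exactly on the colorbar refpoints 0.3 and 0.5.
```

Passing the lambdas directly, as here, gives the same expressive power as the 'linear'/'quadratic'/'cubic' strings without introducing a DSL.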

@story645
Member

I agree with @phobson, 'specially on FuncNorm when limited to callable funcs.
@lrq3000 you did absolutely nothing wrong and it's good for the devs to know which features are wanted by the community.

@jklymak
Member

jklymak commented Feb 18, 2018

I think this PR should stay closed. I'd be into re-opening #7631 that allows the user to specify arbitrary norms.
