gradient() no longer accepts scalar-like wrappers for dx #8292

Closed
dopplershift opened this issue Nov 19, 2016 · 17 comments · Fixed by #9408

@dopplershift

This is with 1.12.0b1. gradient() no longer works with scalar values from a package called pint. With 1.11, this works:

import numpy as np
import pint

ureg = pint.UnitRegistry()
u = np.ones((3, 3)) * ureg('m/s')
np.gradient(u, 1 * ureg('m'))

With the beta you get:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-1908bac90f0e> in <module>()
      1 u = np.ones((3, 3)) * ureg('m/s')
----> 2 np.gradient(u, 1 * ureg('m'))

/Users/rmay/miniconda3/envs/dev/lib/python3.5/site-packages/numpy/lib/function_base.py in gradient(f, *varargs, **kwargs)
   1596             "invalid number of arguments")
   1597     if any([not np.isscalar(dxi) for dxi in dx]):
-> 1598         raise ValueError("distances must be scalars")
   1599 
   1600     edge_order = kwargs.pop('edge_order', 1)

ValueError: distances must be scalars

It looks like the new behavior was introduced by #7618. It seems isscalar already failed for pint-based scalar values; the problem is the new check added to gradient. The pint scalars behave just like scalars for all practical purposes, so IMO there should be no problem passing them in.

It should be noted, though, that gradient itself has other problems working with the array-like objects from pint, and we had to wrap gradient to use it anyway. This breakage isn't the end of the world--I just thought you guys should be aware of the impact.
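
For illustration, here is a minimal check (a sketch, assuming pint is installed alongside the numpy version in this report) showing why the new test trips: np.isscalar does not recognize scalar-like wrapper types such as pint's Quantity.

import numpy as np
import pint

ureg = pint.UnitRegistry()
dx = 1 * ureg('m')        # scalar-like Quantity wrapping a Python int

print(np.isscalar(1))     # True
print(np.isscalar(dx))    # False -- triggers "distances must be scalars"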

@charris
Member

charris commented Nov 19, 2016

Any thoughts on ways to improve this?

@charris
Member

charris commented Nov 19, 2016

Aside, I wonder if the generator needs to be in a list?

@dopplershift
Author

List is definitely unnecessary.

The ideal solution is probably to make isscalar work correctly for this case.

@charris
Member

charris commented Jan 7, 2017

Hmm, making gradient work correctly here could get tricky, especially with unevenly spaced samples as proposed in #8446. For scalars, it is suggested that custom scalar types inherit from np.generic. What does 1 * ureg('m') look like?

@dopplershift
Author

It's just a thin wrapper around a Python integer:

In [1]: import pint
In [2]: import numpy as np
In [3]: ureg = pint.UnitRegistry()

In [4]: 1 * ureg('m')
Out[4]: <Quantity(1, 'meter')>

In [5]: type((1 * ureg('m')).magnitude)
Out[5]: int

@charris
Member

charris commented Jan 10, 2017

Seems like isscalar should be fixed to accept instances of int and long (Python 2). I assume that `isinstance(ureg, int)` works -- is that correct?

@dopplershift
Author

It'd be more like

s = 1 * ureg('m')
isinstance(s, int)

But regardless, that last check returns False. I think only duck typing would solve that problem--and I understand if that's too much to bite off.

@seberg
Member

seberg commented Jan 11, 2017

Not sure, but isscalar actually checks for scalar objects, not even 0-d arrays would fit that for example. Don't we have some other similar way to achieve this?

@ntfdOpenSource

ntfdOpenSource commented Feb 17, 2017

The following example, which worked with numpy 1.10:

x = np.array([0, 1, 2, 3, 4])
dx = np.gradient(x)
y = x**2
np.gradient(y, dx, edge_order=2)

also fails, because dx is always an array of arrays and never an array of scalars when varargs is supplied.

@eric-wieser
Member

eric-wieser commented Feb 17, 2017

@seberg

Not sure, but isscalar actually checks for scalar objects, not even 0-d arrays

Based on this, is it ever appropriate to use isscalar in its current form? isscalar was the cause of the bug behind #8441. Shouldn't the test just be np.ndim(x) == 0, always?
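
For comparison, a small illustration of the two checks (a sketch using only plain numpy):

import numpy as np

x = np.array(1.0)          # 0-d array
print(np.isscalar(x))      # False -- rejected by the current check
print(np.ndim(x) == 0)     # True  -- accepted by the proposed check

print(np.isscalar(1.0))    # True
print(np.ndim(1.0) == 0)   # True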

@ibackus

ibackus commented Apr 5, 2017

bump... I'm using another package, pynbody, which implements SimArrays -- wrappers around numpy arrays that carry units (meters, seconds, etc.). Allowing zero-dimensional arrays in np.gradient would be a big help. Older versions of numpy worked just fine with this.

@eric-wieser
Member

@ibackus: This looks like an easy fix to me, making the change I suggest in the comment above. Do you want to have a crack at implementing it?

@ibackus

ibackus commented Apr 5, 2017

@eric-wieser: For now I'll have to pass... I've not contributed to numpy before and don't really have the time to get things set up so that I can do it properly. All that is required is changing the isscalar check to np.ndim(x) == 0, as you noted.

@maluethi

@eric-wieser, I would fix this but I do not grasp your solution fully. I see that np.isinstance is not working properly. Am I right that you are suggesting to get rid of that check and instead check the dimension of x?

@eric-wieser
Member

Assuming you mean np.isscalar and not isinstance, then yes -- checking dimension == 0 should suffice.
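
A rough sketch of the change under discussion, based on the check quoted in the traceback above (paraphrased, not the actual numpy source):

# Current check inside gradient(), as seen in the traceback:
if any([not np.isscalar(dxi) for dxi in dx]):
    raise ValueError("distances must be scalars")

# Suggested replacement: accept anything zero-dimensional,
# including 0-d arrays and scalar-like wrappers.
if any(np.ndim(dxi) != 0 for dxi in dx):
    raise ValueError("distances must be scalars")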

@maluethi

Looking into it, I found this might already have been resolved by #8446?

@eric-wieser
Member

@maluethi: Good find, but I think the fix is still needed on this line.

You should be able to test this with something like np.gradient(np.ones((3, 3)), np.array(1))
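
For reference, a minimal sketch of what that test could look like once 0-d arrays are accepted (the expected values follow from the gradient of a constant array being zero):

import numpy as np

# The point here is only that a 0-d array is accepted as the spacing argument.
g0, g1 = np.gradient(np.ones((3, 3)), np.array(1))
print(g0)   # zeros; gradient along axis 0
print(g1)   # zeros; gradient along axis 1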
