ENH: add ND generalizations to np.gradient #9409
Comments
A less obvious case is when the coordinates do not align with the axes:

```python
import numpy as np
from numpy.testing import assert_equal

i, j = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-4, 4, 9))
x = i + j
y = i - j
z = x**2 + y**2
dzdx, dzdy = np.gradient(z, x, y)  # Case C: rotated grid
assert_equal(dzdx, 2*x)
assert_equal(dzdy, 2*y)
```

Stretching further, I'd expect this to work too, up to rounding errors:

```python
from numpy.testing import assert_almost_equal

r, theta = np.meshgrid(np.linspace(0, 2, 10), np.linspace(-np.pi, np.pi, 20))
z = r**2
x = r * np.cos(theta)
y = r * np.sin(theta)
dzdx, dzdy = np.gradient(z, x, y)  # Case D: non-linear grid
assert_almost_equal(dzdx, 2*x)
assert_almost_equal(dzdy, 2*y)
```
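As a side note, the rotated-grid case (Case C) can already be handled with current `np.gradient` by differentiating along the grid axes and applying the chain rule, because the Jacobian of the map `(i, j) -> (x, y) = (i + j, i - j)` is constant. This is only a sketch of that workaround, not a proposal for the API:

```python
import numpy as np

# The rotated grid from Case C: x = i + j, y = i - j.
i1 = np.linspace(-2, 2, 5)
j1 = np.linspace(-4, 4, 9)
i, j = np.meshgrid(i1, j1)   # 'xy' indexing: j varies along axis 0
x = i + j
y = i - j
z = x**2 + y**2

# Gradient along the grid axes (axis 0 is j, axis 1 is i).
dzdj, dzdi = np.gradient(z, j1, i1, edge_order=2)

# Chain rule: dz/di = dz/dx + dz/dy and dz/dj = dz/dx - dz/dy,
# so invert that 2x2 linear relation.
dzdx = (dzdi + dzdj) / 2
dzdy = (dzdi - dzdj) / 2

ok_x = np.allclose(dzdx, 2*x)
ok_y = np.allclose(dzdy, 2*y)
```

Since z is quadratic, the second-order differences are exact and both checks hold.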
The first case can be "solved" this way:

```python
import numpy as np

x, y = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-4, 4, 90))
z = x**2 + y**2
dzdy, dzdx = np.gradient(z, y[:, 0], x[0, :])
np.allclose(dzdx[1:-1, 1:-1], 2*x[1:-1, 1:-1])
np.allclose(dzdy[1:-1, 1:-1], 2*y[1:-1, 1:-1])
dzdy, dzdx = np.gradient(z, y[:, 0], x[0, :], edge_order=2)
np.allclose(dzdx, 2*x)
np.allclose(dzdy, 2*y)
```

or directly using the linspace arrays used to build the meshgrid:

```python
dzdy, dzdx = np.gradient(z, np.linspace(-4, 4, 90), np.linspace(-2, 2, 50), edge_order=2)
np.allclose(dzdx, 2*x)
np.allclose(dzdy, 2*y)
```

Note that we may have "errors" on the boundaries if using the order-1 approximation. In the other examples you no longer have a "grid" of points, and as I mentioned in the other discussion, computing the gradient on an arbitrary cloud of points or along arbitrary directions is considerably more complex; I think that can be considered a completely different problem.
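The boundary remark can be seen in a minimal 1-d sketch: for a quadratic, central differences are exact in the interior, but the first-order one-sided formula used at the edges with `edge_order=1` carries an O(h) error that `edge_order=2` removes:

```python
import numpy as np

x = np.linspace(-2, 2, 50)
z = x**2

dzdx1 = np.gradient(z, x, edge_order=1)
dzdx2 = np.gradient(z, x, edge_order=2)

# Interior central differences are exact for a quadratic.
interior_exact = np.allclose(dzdx1[1:-1], 2*x[1:-1])
# First-order one-sided differences at the two edges are off by O(h).
edges_off = not np.allclose(dzdx1[[0, -1]], 2*x[[0, -1]])
# The second-order one-sided formula is again exact for a quadratic.
edges_fixed = np.allclose(dzdx2, 2*x)
```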
Perhaps, but an end user would probably expect this operation to be called `np.gradient`. It seems that a second-order approach for varying spacing can be derived in a similar way to how you did the 1d stuff in #8446, but instead of using two adjacent points, you use the four direct neighbours. As far as I can tell, your current solution uses the weights that solve the following for each entry (F = forward, B = backward):

$$
\begin{bmatrix}
k \\ k_F \\ k_B
\end{bmatrix}
= \operatornamewithlimits{argmin}_\mathbf{k} \left\|
\begin{bmatrix}
f & 1 & 1 \\
0 & \delta x_F & \delta x_B \\
0 & \delta x_F^2 & \delta x_B^2
\end{bmatrix}
\mathbf{k} - \begin{bmatrix}
0 \\ 1 \\ 0
\end{bmatrix}
\right\|^2
$$

In 2d, with the four direct neighbours (N, E, S, W), the analogous system is:

$$
\begin{bmatrix}
k \\ k_N \\ k_E \\ k_S \\ k_W
\end{bmatrix}
= \operatornamewithlimits{argmin}_\mathbf{k} \left\| \begin{bmatrix}
f & 1 & 1 & 1 & 1 \\
0 & \delta x_N & \delta x_E & \delta x_S & \delta x_W \\
0 & \delta y_N & \delta y_E & \delta y_S & \delta y_W \\
0 & \delta x_N^2 & \delta x_E^2 & \delta x_S^2 & \delta x_W^2 \\
0 & \delta y_N^2 & \delta y_E^2 & \delta y_S^2 & \delta y_W^2 \\
0 & \delta x_N \delta y_N & \delta x_E \delta y_E & \delta x_S \delta y_S & \delta x_W \delta y_W
\end{bmatrix}
\mathbf{k} - \begin{bmatrix}
0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}
\right\|^2
$$

Clearly we can't be second-order accurate there, as that least-squares problem doesn't always have an exact solution. In general, taking only the direct neighbours in N dimensions gives 2N + 1 unknowns against 1 + N + N(N+1)/2 second-order constraints, so the system is overdetermined for N > 1. If we discard the cross-terms, then an exact solution always exists.
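As a sanity check, the 1-d system above can be solved numerically. This is only a sketch: I take the first-row entry `f` to be 1 (my assumption, so that the weights sum to zero), and use `lstsq` even though the square 1-d system has a unique solution, since the same call covers the overdetermined ND analogue:

```python
import numpy as np

def fd_weights(dx_f, dx_b):
    """Weights (k, k_F, k_B) with f'(x) ~ k*f(x) + k_F*f(x+dx_f) + k_B*f(x+dx_b).

    dx_b is the signed displacement to the backward neighbour, so it is
    negative.  Rows: weights sum to zero, first-order Taylor terms sum
    to one, second-order Taylor terms cancel.
    """
    A = np.array([[1.0, 1.0, 1.0],
                  [0.0, dx_f, dx_b],
                  [0.0, dx_f**2, dx_b**2]])
    rhs = np.array([0.0, 1.0, 0.0])
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

# Check on f(x) = x**2 with uneven spacing: the 3-point rule is exact.
x = np.array([0.0, 0.3, 1.0])
k, kf, kb = fd_weights(x[2] - x[1], x[0] - x[1])
f = x**2
deriv = k*f[1] + kf*f[2] + kb*f[0]   # derivative estimate at x = 0.3
```

Analytically the derivative of x**2 at 0.3 is 0.6, and the computed weights reproduce it exactly for a quadratic.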
I've started trying to implement this, but was blocked by #8827
Let me try to restate the problem, to see if I have understood correctly. The 1-d case is OK; it takes as input:

- Issue 0: users may want the gradient along an arbitrary direction.
- Issue 1: users may have the grid defined as the output of meshgrid, that is, N arrays of N-dimensional coordinates. Note: alignment of the axes to the grid is not strictly required as long as the grid is simply "rotated"; Case 2 does not fall in this category, however.
- Issue 2: users may have a cloud of points on an unstructured grid.

Finally, I haven't done the math, but I am not sure about your formulation of the 2d case: at least there should be some minus signs somewhere, and I suspect that you might also end up with mixed terms. Plus (but I may be wrong) it is not a least-squares problem: you should always have one and only one solution.
I think this is actually equivalent to the problem we're solving? Perhaps scipy having a solution already is a good argument not to extend numpy, although it might be worth linking to it in the docs under "Notes".
I've folded these into the definitions of the δx and δy terms, which are signed displacements. And those are the bottom line of the matrix, which come from the cross-terms of the second-order Taylor expansion.
Coming back to this, I'm realizing that the sensible name for this operation is …
And thinking about it some more, your implementation of the 1d case is actually just fully-expanded Lagrange polynomial fitting. There's a paper on multivariate Lagrange polynomial fitting here, which might be relevant.
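A quick way to see the Lagrange connection (my sketch, not the actual implementation): differentiate the quadratic Lagrange interpolant through three unevenly spaced points at the middle node. The resulting coefficients are exactly the second-order finite-difference weights for uneven spacing:

```python
import numpy as np

def lagrange_deriv_weights(x0, x1, x2):
    """Derivative at x1 of the quadratic Lagrange interpolant through
    (x0, f0), (x1, f1), (x2, f2): returns (w0, w1, w2) such that
    p'(x1) = w0*f0 + w1*f1 + w2*f2."""
    w0 = (x1 - x2) / ((x0 - x1) * (x0 - x2))
    w1 = (2*x1 - x0 - x2) / ((x1 - x0) * (x1 - x2))
    w2 = (x1 - x0) / ((x2 - x0) * (x2 - x1))
    return w0, w1, w2

# Uneven points; differentiating the interpolant is exact for quadratics.
x0, x1, x2 = 0.0, 0.3, 1.0
w0, w1, w2 = lagrange_deriv_weights(x0, x1, x2)
deriv = w0*x0**2 + w1*x1**2 + w2*x2**2   # derivative of x**2 at x1
```

Analytically the answer is 2*x1 = 0.6, and these closed-form weights agree with the ones obtained by solving the Taylor-constraint linear system.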
Taking the enhancement in #8446 even further. This would allow:

…

Right now, this errors, because `x` and `y` are 2d.
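A minimal reproduction of the current behaviour, as I understand it for the numpy versions discussed here: `np.gradient` rejects coordinate arrays that are not scalars or 1-d, so passing meshgrid output raises a `ValueError`:

```python
import numpy as np

x, y = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-4, 4, 9))
z = x**2 + y**2

try:
    np.gradient(z, x, y)   # 2-d spacing arrays are not accepted
    raised = False
except ValueError:
    raised = True
```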