The issue is about several reducing functions such as `np.mean` or `np.std`. These functions seem to return a different result when passed a 2D array together with the `axis` argument than when applied to the corresponding manually subscripted 1D slices. Formally, the two methods describe the exact same operation, so they should return the same result. However, the assertion in the reproducing example below fails for various sizes of the 0th axis.
NumPy takes the freedom to make these computations as fast as possible. Since numerical precision is always limited, doing so changes the operation order and thus the result. There is a second effect at work with summing specifically (NumPy uses pairwise summation, whose blocking depends on how the reduction is carried out), which can in some cases create genuinely large differences. At first sight, though, there are probably no serious numerical precision issues involved here, given the array sizes.
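The order-dependence is easy to see in isolation. A minimal sketch (not the original reproducer, just an illustration of the underlying effect): float32 addition is not associative, so reordering a reduction can change its result.

```python
import numpy as np

# Floating-point addition is not associative: adding a small value to a
# much larger one first can lose it entirely to rounding.
x = np.float32(1e8)
y = np.float32(1.0)

print((x + y) - x)  # 0.0 -- y is absorbed when added to x first
print((x - x) + y)  # 1.0 -- reordering the same terms preserves y
```

Any reduction that changes its summation order can therefore legitimately produce slightly different results.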
It would be nice to document this more clearly (especially the possibility of large differences in some cases), though. If you know where in the documentation we should improve this, please say so, or propose a change!
Feel free to keep discussing, but there are open duplicates of this issue and there is probably not much to be done, so closing.
Reproducing code example:
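The original code block was not preserved in this copy. A minimal sketch of the kind of reproducer described above (the array shape, dtype, and random data are assumptions, not the original values):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((100003, 4), dtype=np.float32)

# Reducing the 2D array along axis 0...
axis_result = np.mean(a, axis=0)
# ...versus manually subscripting each column and reducing the 1D slice.
manual_result = np.array([np.mean(a[:, j]) for j in range(a.shape[1])])

# Formally the same operation, but the internal summation order differs,
# so bit-for-bit equality can fail even though the values stay very close.
print(np.array_equal(axis_result, manual_result))
print(np.abs(axis_result - manual_result).max())
```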
Numpy/Python version information:
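The version output itself was not preserved in this copy; such reports are usually produced with something like:

```python
import sys

import numpy as np

# Standard way to report the environment when filing a NumPy issue.
print("NumPy:", np.__version__)
print("Python:", sys.version)
```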
Further investigation
Checking various sizes for the 0th-axis for various methods:
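The sweep itself was not preserved; a sketch of such a check (the exact sizes and set of methods tested in the original are assumptions) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# For each reducing method and 0th-axis size, compare the axis=0 reduction
# against reducing each manually subscripted column.
for method in (np.mean, np.std, np.sum):
    for n in (10, 1000, 100003):
        a = rng.random((n, 4), dtype=np.float32)
        via_axis = method(a, axis=0)
        via_slices = np.array([method(a[:, j]) for j in range(a.shape[1])])
        print(method.__name__, n, np.array_equal(via_axis, via_slices))
```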
This yields the following results: