Issues with Numpy 1.11 release candidate #5950
Furthermore it raises 2 new FutureWarnings very frequently; the second one is:

/Users/jhn/Envs/numpy111betamatplotlib151/lib/python3.5/site-packages/numpy/ma/core.py:3192: FutureWarning: Currently, slicing will try to return a view of the data, but will return a copy of the mask. In the future, it will try to return both as views. This means that using `__setitem__` will propagate values back through all masks that are present.

We should probably suppress them somehow.
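If we do go the suppression route, a minimal sketch could look like the following (this is an assumption on my part, not code that exists in matplotlib; the message filter is taken from the warning text quoted above):

```python
import warnings

# Hypothetical filter for the numpy.ma slicing FutureWarning quoted above;
# `message` is a regex matched against the start of the warning text.
warnings.filterwarnings(
    "ignore",
    message="Currently, slicing will try to return a view of the data",
    category=FutureWarning,
)
```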
I have not tested 2.x or master yet.
Both the "expected" test values and the values that are obtained are wrong. The algorithm is intended to remove minor ticks that coincide with major ticks, but the expected values in the test include 1, which is a major tick. The test is now failing because floating point arithmetic is causing the erroneous algorithm to break in a different place.
I would say that the old test was actually written in a way that accommodates a bug in matplotlib (probably itself stemming from a bug in numpy; numpy/numpy#7022 is the obvious suspect here): if you only plot minor ticks, you see that there is a spurious minor tick at x=1 (but not at the other major tick positions); see the sketch below. I don't know exactly how to handle this in the tests while maintaining backward compatibility, but from a correctness POV it is pretty clear that the new values are the correct ones.
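A quick way to look at just the tick locations (a sketch only: AutoMinorLocator and the 0 to 1.5 data range are assumptions on my part, not necessarily what the failing test uses):

```python
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker

fig, ax = plt.subplots()
ax.set_xlim(0, 1.5)
ax.xaxis.set_minor_locator(mticker.AutoMinorLocator())

# Compare the two sets of locations; per the discussion above, the buggy
# filtering can leave a spurious minor tick at the major tick position x=1.
print(ax.xaxis.get_majorticklocs())
print(ax.xaxis.get_minorticklocs())
```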
If you have time, maybe you could check how much numpy/numpy#7187 helps with the FutureWarning spam? If it is almost gone, you can maybe work around it (it is a bit ugly in any case).
@seberg Thanks, I will try to have a look over the weekend.
Wrt. the test failure, I'd suggest switching between testing for the new value (which is, after all, the correct one...) and the old value depending on the version of numpy; see the sketch below.
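Something along these lines, where the tick values are placeholders rather than the real expected values from the failing test:

```python
from distutils.version import LooseVersion

import numpy as np

# Placeholder expected minor tick locations; the real values come from the test.
if LooseVersion(np.__version__) >= LooseVersion("1.11.0"):
    expected = [0.2, 0.4, 0.6, 0.8]        # new (correct): no minor tick at the major tick 1.0
else:
    expected = [0.2, 0.4, 0.6, 0.8, 1.0]   # old value, kept only for older numpy
```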
Numpy 1.11 beta 3 is now released on pypi, so our allow-failures job with python 3.6 and …
The issues with the numpy release candidate have changed a bit and I now see:
The changes are in auto limits in these 2 tests (duplicated in png, svg and pdf). This seems to mainly be due to a change in the implementation of divmod in numpy. For example:

Numpy 1.10:

In [1]: divmod(1.0, 0.2)
Out[1]: (4.0, 0.19999999999999996)

In [2]: divmod(np.float64(1.0), 0.2)
Out[2]: (5.0, 0.19999999999999996)

In [2]: divmod(1.0, np.float64(0.2))
Out[2]: (5.0, 0.19999999999999996)

In [2]: divmod(np.float64(1.0), np.float64(0.2))
Out[2]: (5.0, 0.19999999999999996)

Numpy 1.11rc2:

In [1]: divmod(np.float64(1.0), 0.2)
Out[1]: (4.0, 0.19999999999999996)

divmod is used, among other places, within ticker.MaxNLocator.bin_boundaries, which is responsible for calculating the min and max boundaries in autoscaling.
The numpy 1.10 result is obviously wrong in this case, since 5.0 * 0.2 + 0.19999999999999996 = 1.2 and not 1 as expected.
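That check can be run directly (a standalone snippet using the quotients and remainder quoted above, nothing matplotlib-specific):

```python
# Reconstruct x = q * divisor + r for the two divmod results quoted above.
r = 0.19999999999999996
print(5.0 * 0.2 + r)  # 1.2 -> the numpy 1.10 quotient does not reconstruct x = 1.0
print(4.0 * 0.2 + r)  # 1.0 -> the numpy 1.11rc2 / CPython quotient does
```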
Pretty sure this is at least partially handled by #5768 (which includes a local fix to …).
@anntzer Looks good. I somehow missed that PR; I think we should make a push for getting this in.
With #5768 merged, the …
The remaining issues:

Numpy 1.10:

In [1]: 0.40000000099999999//2e-10
Out[1]: 2000000004.0

In [2]: np.float64(0.40000000099999999)//2e-10
Out[2]: 2000000005.0

Numpy 1.11rc2:

In [2]: 0.40000000099999999//2e-10
Out[2]: 2000000004.0

In [3]: np.float64(0.40000000099999999)//2e-10
Out[3]: 2000000004.0
There seem to have been a few bugs in the betas; have you tried again with the final 1.11?
I get the second behavior with 1.11 from conda.
This test only ever passed due to an error, and now fails because the underlying bug in numpy's divmod has been fixed (numpy/numpy#6127). See matplotlib#5950.
This is another fallout from numpy/numpy#6127, which #6192 was working around. In those cases the incorrect rounding was hurting us; in this case it was helping us. This is the situation for np 1.10:

In [11]: divmod(0.500000001, 2e-10)
Out[11]: (2500000004.0, 1.9999995350196985e-10)

In [12]: divmod(np.asarray(0.500000001), 2e-10)
Out[12]: (2500000005.0, 1.9999995350196985e-10)

In [13]: divmod(np.asarray(0.500000001), 2e-10)[1] - 2e-10
Out[13]: -4.6498030157606349e-17

so we are clearly hitting the edges of 64-bit float precision, and the cpython / np1.11 values are clearly correct:

In [21]: 0.500000001 == ((divmod(0.500000001, 2e-10)[0] * 2e-10) + divmod(0.500000001, 2e-10)[1])
Out[21]: True

In [22]: 0.500000001 == ((1 + divmod(0.500000001, 2e-10)[0]) * 2e-10)
Out[22]: True

In [23]: divmod(0.500000001, 2e-10)[1] == 2e-10
Out[23]: False
Have we fixed these issues?
AFAIK we are skipping the relevant tests on 2.x and master, i.e. 26cba4b. On 1.5.x I think there are a few additional tests we need to skip (they are probably failing in the nightly build on travis, which installs directly from pypi and skips the wheelhouse).
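For 1.5.x, a version-gated skip could look something like this (a sketch only: the test name is a placeholder and this is not necessarily what 26cba4b does; it assumes the nose-based test setup used at the time):

```python
from distutils.version import LooseVersion

import numpy as np
from nose import SkipTest


def test_minor_tick_removal():  # placeholder test name
    if LooseVersion(np.__version__) >= LooseVersion("1.11.0"):
        raise SkipTest("expected values rely on pre-1.11 numpy divmod behaviour")
    # ... original test body unchanged ...
```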
Closing, as (AFAICT) the only remaining thing to do is to remove the version check on numpy once we only support numpy 1.11+ (but there will be plenty of these).
I tried running the test suite for 1.5.1 on numpy 1.11 beta 2 and I am seeing a single failure that we should investigate (Python 3.5, OSX).