The output of numpy.dot appears to be impacted by the total array shape (possibly only with MKL).

Example

Expected output

Identical arrays printed in both cases.

Actual output

i.e. only a 1e-4 absolute precision.
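A minimal sketch of the kind of comparison being described, assuming float32 inputs (this is not the original snippet from @TomDLT; the seed, shapes, and magnitudes are made up for illustration):

```python
import numpy as np

# Hypothetical reproducer: the same rows dotted with the same vector, once taken
# from the result computed on the full float32 array and once computed on a
# small standalone copy of those rows.
rng = np.random.RandomState(42)
X = rng.rand(10000, 100).astype(np.float32)
w = rng.rand(100).astype(np.float32)

full = np.dot(X, w)[:10]          # computed as part of the full (10000, 100) array
small = np.dot(X[:10].copy(), w)  # computed on a (10, 100) copy of the same rows

# Mathematically identical; the report above is that with an MKL-backed numpy the
# two can disagree far more than expected, apparently depending on the array shape.
print(np.abs(full - small).max())
```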
Environment information

Using the following conda environment,

the actual output above, with bad precision, is produced on the following 4 CPU core Linux system,
$ uname -ap
Linux [..] 4.4.30-mod-std-ipv6-64 #9 SMP Tue Nov 1 17:58:26 CET 2016 x86_64 GNU/Linux
$ cat /proc/cpuinfo | grep "model name" | head -n 1
model name : Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz
while the same environment produces the correct result exactly on the following (2 CPU core) Linux system,
$ uname -a
Linux 4.11.6-gentoo #9 SMP Fri Sep 29 14:28:20 CEST 2017 x86_64 Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz GenuineIntel GNU/Linux
This issue was originally detected indirectly in scikit-learn/scikit-learn#11711, where it was reported for Linux, Mac OS, and Windows. Thanks to @TomDLT for a reproducible minimal numpy code snippet.
Additional information
Creating the Python 3.7 env with conda and then installing the numpy OpenBLAS wheel with pip doesn't have this issue.
Installing numpy in conda together with openblas and nomkl also doesn't have this issue, so it seems that it is due to MKL. Setting MKL_NUM_THREADS=1 or MKL_DOMAIN_NUM_THREADS=1 doesn't help.
64 bit floats don't appear to be impacted: the obtained absolute precision is 1e-12, for numbers of a few 1e3, which appears consistent with ~16 digits of precision.
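To make the relative-precision reasoning explicit, here is a quick check of the magnitudes quoted above against each dtype's machine epsilon (an added illustration, not part of the original report):

```python
import numpy as np

# Reported absolute errors divided by the magnitude of the values (~1e3),
# compared with the machine epsilon of each dtype.
print(1e-4 / 1e3, np.finfo(np.float32).eps)    # ~1e-7  vs ~1.19e-07
print(1e-12 / 1e3, np.finfo(np.float64).eps)   # ~1e-15 vs ~2.22e-16
```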
You cannot expect better than 32 bit precision with 32 bit inputs. I seem to recall from the BLIS discussion that there may be an option in the future to use higher precision in the accumulation, but there are no plans for that at the moment.
Yes, that makes sense. I guess the calculations in OpenBLAS just happen to be unchanged; there is no guarantee that they would be (depending on the optimizations / implementation used internally). Closing.
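As an aside illustrating the accumulation point above: splitting the same float32 dot product into blocks, as an optimized BLAS may do internally, changes the accumulation order and can shift the result by a few float32 ulps. A hypothetical sketch with made-up sizes:

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.rand(100_000).astype(np.float32)
b = rng.rand(100_000).astype(np.float32)

reference = np.dot(a.astype(np.float64), b.astype(np.float64))  # float64 reference

whole = np.dot(a, b)                                                      # one accumulation order
halves = np.dot(a[:50_000], b[:50_000]) + np.dot(a[50_000:], b[50_000:])  # blocked accumulation

# Both are valid float32 evaluations of the same mathematical dot product,
# but they typically differ slightly from each other and from the reference.
print(whole - reference, halves - reference)
```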