Default atol value in assert_allclose_dense_sparse is too low #13977
Comments
It seems more than reasonable. Actually, I think it would make more sense to have two tolerances in general: one for float32 and one for float64, throughout the whole test suite (this is what I did for the _cython_blas tests).
The check for closeness of new_result and the previously computed result should depend on the dtype of the result, and cannot be tighter than that precision's epsilon.
Would you propose just checking the dtypes of the input and determining the tol from that, @jeremiedbb? Use the tol corresponding to the less precise input dtype?
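A dtype-driven tolerance could look roughly like the sketch below; the helper name `tol_from_dtypes` and the eps multiplier are assumptions for illustration, not scikit-learn API.

```python
import numpy as np

def tol_from_dtypes(*arrays, eps_factor=4):
    # Hypothetical helper: derive atol from the *least* precise
    # floating-point dtype among the inputs.
    floating = [a.dtype for a in arrays if np.issubdtype(a.dtype, np.floating)]
    least_precise = max(floating, key=lambda dt: np.finfo(dt).eps)
    return eps_factor * np.finfo(least_precise).eps

a = np.ones(3, dtype=np.float32)
b = np.ones(3, dtype=np.float64)
atol = tol_from_dtypes(a, b)  # driven by float32, the less precise input
np.testing.assert_allclose(a, b, atol=atol)
```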
I would go with the first solution. A fixture should make it not too painful to implement, I guess.
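As a rough illustration of the fixture idea, something like the following could parametrize tests over dtypes and hand each one a matching tolerance; the tolerance values and names here are made up, not what the test suite uses.

```python
import numpy as np
import pytest

# Illustrative per-dtype tolerances; the actual values would need tuning.
RTOL = {np.float32: 1e-4, np.float64: 1e-7}

@pytest.fixture(params=[np.float32, np.float64])
def dtype(request):
    return request.param

def test_result_close(dtype):
    x = np.ones(5, dtype=dtype)
    # A one-eps perturbation should pass with a dtype-aware tolerance.
    np.testing.assert_allclose(x + np.finfo(dtype).eps, x, rtol=RTOL[dtype])
```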
Also related to #10562
Description
daal4py's RandomForestRegressor estimator, powered by Intel(R) DAAL, fails check_fit_idempotent; see uxlfoundation/scikit-learn-intelex#102. The failure is small.
This issue questions the choice of the atol parameter's default value of 1e-7 in assert_allclose_dense_sparse; see sklearn/utils/testing.py#L404. The default value is smaller than the epsilon of a single-precision floating-point number.
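Concretely, assuming NumPy, the default atol sits below float32 machine epsilon, so even a last-bit difference between two float32 values near 1.0 exceeds it:

```python
import numpy as np

eps32 = np.finfo(np.float32).eps  # 2**-23, about 1.19e-07
print(eps32 > 1e-7)  # True: the default atol is below float32 precision

# The gap between 1.0 and the next representable float32 already
# exceeds atol=1e-7:
gap = np.nextafter(np.float32(1.0), np.float32(2.0)) - np.float32(1.0)
print(gap > 1e-7)  # True
```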
Threaded execution cannot guarantee bit-wise reproducibility, due to possible changes in the order of associative operations such as addition or multiplication.
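As a sketch of that effect, using accumulation order as a stand-in for thread scheduling (the array size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000).astype(np.float32)

s_forward = x.sum()
s_reversed = x[::-1].sum()  # same values, different accumulation order

# The two sums are mathematically equal but may differ by a few ulps,
# which is why bit-wise reproducibility cannot be assumed.
print(abs(float(s_forward) - float(s_reversed)))
```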
If there are no objections, I would open a PR to increase it to a small multiple of np.finfo(np.float32).eps.