Improve subnormal expectations #339
Comments
+1 for making behavior on subnormals implementation-defined. Not sure we should add any feature to determine this behavior. It may indeed make sense to add to …

> Are there more options than "handle subnormals correctly" or "flush them to zero"?
Quick drive-by comment: On NVIDIA GPUs, no. See this section: https://docs.nvidia.com/cuda/floating-point/index.html#compiler-flags
I support leaving the behavior on subnormals implementation-defined too. Perhaps, in …
For HIP/OpenCL the answer is also no, at least for LLVM-based compilers: https://clang.llvm.org/docs/ClangCommandLineReference.html
@oleksandr-pavlyk What's the subnormal behavior of oneAPI on Intel CPUs/GPUs?
Currently the spec doesn't mention subnormal floats, so presumably the IEEE 754 behaviour stands.
For typical CuPy builds, subnormals are flushed to zero, as explained by @leofang in numpy/numpy#18536 (comment).

It does seem bad if IEEE 754 subnormal behaviour is expected but will seemingly be forever violated by one of the standard's adopters (when compiled with default settings). So I wonder if subnormal behaviour could just be specified as out of scope. What would be the ramifications?
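If behaviour were left out of scope, a user who cares could still probe for flush-to-zero at runtime. A minimal sketch (the helper name `flushes_to_zero` and the `xp` namespace argument are illustrative, not part of any spec):

```python
import numpy as np

def flushes_to_zero(xp, dtype):
    """Probe whether an array library flushes subnormal results to zero.

    Halving the smallest positive normal yields a subnormal under
    IEEE 754; a flush-to-zero implementation returns exactly 0 instead.
    """
    smallest_normal = xp.finfo(dtype).tiny  # smallest positive normal
    x = xp.asarray(smallest_normal, dtype=dtype)
    return bool(x / 2 == 0)

# NumPy on CPU follows IEEE 754 by default, so this prints False;
# a typical CuPy build would print True for the same probe.
print(flushes_to_zero(np, np.float64))
```

The same probe works for any array namespace exposing `finfo` and `asarray`, which is why it could serve as a portable workaround even without a dedicated spec feature.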
Notably, in #131 and numpy/numpy#18536 there was discussion of a `smallest_subnormal` property for `finfo`, but it was seen as a pretty awkward fit. It might be useful if some kind of information could tell the user that subnormals are not supported, e.g. `smallest_subnormal == smallest_normal`.
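For reference, NumPy (1.22+) already exposes both attributes on `finfo`, so the suggested check can be sketched there (attribute names follow NumPy; a standardized API might spell them differently):

```python
import numpy as np

# On an IEEE 754-conforming implementation the smallest subnormal is
# strictly smaller than the smallest normal, so this reports True.
# An implementation signalling "no subnormals" via the convention above
# would instead report smallest_subnormal == smallest_normal, i.e. False.
fi = np.finfo(np.float32)
subnormals_supported = fi.smallest_subnormal != fi.smallest_normal
print(subnormals_supported)
```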