np.fft.fft always returns np.complex128 regardless of input type #17801
Comments
Correct, this is documented.
It's perhaps worth adding to our docs that …
I have been using scipy.fft.fft to preserve the type (among other benefits, like overwrite_x), but I prefer to keep the dependencies of my libraries to a minimum (just NumPy if possible), and this default behaviour for numpy.fft seems strange. Yes, you are correct, this is documented and the expected behaviour, so this isn't a bug report. I am interested, though, in a bigger conversation about why the promotion and demotion to np.complex128 is the default behaviour. Is there something fundamental about the FFT algorithm used that makes the promotion behaviour preferred, either for precision or processing efficiency? For context, a quick comparison of the two (the array is only an illustration; the np.fft output dtype is what the NumPy versions discussed here produce):
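```python
import numpy as np
import scipy.fft

x = np.zeros(1024, dtype=np.complex64)

# numpy.fft computes in double precision and returns complex128
print(np.fft.fft(x).dtype)                       # complex128

# scipy.fft keeps single precision, and overwrite_x lets it reuse the input buffer
print(scipy.fft.fft(x).dtype)                    # complex64
print(scipy.fft.fft(x, overwrite_x=True).dtype)  # complex64
```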
Supporting 4 types (half, single, double, longdouble) requires some combination of:
These are all a fair amount of work to get right; work which it was decided was probably not worthwhile, since SciPy has already done it.
I expect you won't be able to avoid C++ extensions forever, though... are there really no plans for this?
There were preliminary plans about 9 years ago, but they evaporated. A big problem with C++ historically was that the compilers were not reliable and consistent. The error messages in C++ were also garbage, and C++ itself was overloaded with "features"; writing good C++ is harder than writing good C. I haven't used C++ lately, but I think it has greatly improved since C++11. Go seems to be the new kid on the block.
Yes ... and Rust, and Julia, etc. :-) Still, I'd bet that if …
It is worth noting that CuPy deliberately deviates from NumPy's behavior precisely because of this upcast concern.
I'm happy to show how to use the C preprocessor to generate code for 4 types from one template. It is surprisingly simple once you know the tricks.
xref older issue: #6012
The array API requires fft functions to return outputs with the same precision as the input. Given that NumPy is making breaking changes in NumPy 2.0 for array API compatibility (#25076), it would be good to get this fixed for 2.0. Ever since #25536, fft functions are implemented under the hood with ufuncs, although the top-level functions are still pure Python wrappers that call the …

I'm planning to make a PR to fix this, but I want to know what the best approach is. It sounds like the ideal fix would be updating pocketfft, but I also suspect that would be the most difficult to do. Given how close the 2.0 branch date is, would it be better to just manually downcast to get the API breakage in, and update the underlying implementation later? A potential downside with that is that the single-precision pocketfft implementation might return different results than a downcasted double-precision one, so putting this off would result in a second, smaller breakage with different results in NumPy 2.1.

If we do just want to manually downcast for now, is it better to do that in the ufunc by adding an …

Or if people think updating pocketfft would be doable, I can go for that. I think SciPy's implementation is already templated C++ that supports single and double precision. What is the status of C++ in NumPy?

For illustration only, a wrapper-level downcast (hypothetical helper name, not NumPy's actual code) could look roughly like the sketch below.
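```python
import numpy as np

def fft_with_preserved_precision(a, n=None, axis=-1, norm=None):
    """Hypothetical sketch: compute with np.fft.fft (double precision under
    the hood) and cast the result back down for single-precision inputs."""
    a = np.asarray(a)
    result = np.fft.fft(a, n=n, axis=axis, norm=norm)
    if a.dtype in (np.float32, np.complex64):
        # Note: this is a rounded double-precision result, which may differ
        # slightly from a true single-precision pocketfft computation.
        result = result.astype(np.complex64, copy=False)
    return result

x = np.ones(8, dtype=np.complex64)
print(fft_with_preserved_precision(x).dtype)  # complex64
```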
NumPy uses C++ these days, so that is not a problem. I'm curious whether the fft interfaces are the same in NumPy and SciPy apart from the support for 32 bits? In any case, downcasting to get compatibility with the array API seems fine to me; we could then bring the SciPy version over in 2.1.0 if it is decided that it is too late for 2.0.0.
@asmeurer - updating pocketfft to the C++ version was definitely the plan, but I haven't yet looked at it to know whether the interface is easily compatible - though I suspect it is. I did already ensure the pocketfft piece is separated out as much as possible. In principle, with the ufuncs, it is easy to downcast, and for large stacks of arrays the memory overhead is not too large. Anyway, I would be happy to help!
@mhvk just to be clear, do you think downcasting in the ufunc or in the Python wrappers is better?
My feeling is that switching to C++ …
So it would be great (IMO) if the circumstances allow it.
See #25711 for what hopefully is a fix - but please check!
np.fft.fft returns np.complex128 regardless of the input type. If the input type is np.complex64, returning a complex128 array can have a huge effect on system memory, and casting back with asarray is costly for large arrays.
Reproducing code example:
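A minimal example along these lines (array size chosen arbitrarily) demonstrates the upcast and the cost of casting back:

```python
import numpy as np

x = np.zeros(2**20, dtype=np.complex64)   # single-precision complex input

y = np.fft.fft(x)
print(y.dtype)    # complex128 -- twice the memory footprint of the input

# Casting back requires an extra pass and a temporary copy
y32 = np.asarray(y, dtype=np.complex64)
print(y32.dtype)  # complex64
```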
Expected Behaviour
I would expect np.fft.fft to return the same type (for complex to complex) as the input, or allow control of the return type.
NumPy/Python version information:
NumPy 1.19.4; Python 3.8.6 (default, Sep 25 2020, 09:36:53) [GCC 10.2.0]