np.fft.fft always returns np.complex128 regardless of input type #17801


Closed
kyjohnso opened this issue Nov 18, 2020 · 16 comments · Fixed by #25711

Comments

@kyjohnso

np.fft.fft returns np.complex128 regardless of input type. If the input type is np.complex64, returning a complex128 array can have a huge effect on system memory, and casting back with asarray is costly for large arrays.

Reproducing code example:

```python
import numpy as np
x = np.random.multivariate_normal([0, 0], [[1, 0], [0, 1]], 1024).dot([1, 1j])
x = np.asarray(x, dtype=np.complex64)
print("dtype of x is: {}".format(x.dtype))
X = np.fft.fft(x)
print("dtype of X is: {}".format(X.dtype))
```

Output:

```
dtype of x is: complex64
dtype of X is: complex128
```

Expected Behaviour

I would expect np.fft.fft to return the same type as the input (for complex-to-complex transforms), or to allow control of the return type.
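For scale, a sketch of the cast-back workaround and its cost (sizes assume 1024 elements; the intermediate dtype is as observed on the NumPy versions discussed here):

```python
import numpy as np

# Sketch of the cast-back workaround: the double-precision intermediate
# doubles peak memory before being cast back down.
x = np.zeros(1024, dtype=np.complex64)   # 8 bytes/element -> 8192 bytes
X = np.fft.fft(x)                        # complex128 on the versions discussed
X32 = X.astype(np.complex64)             # extra pass plus a temporary copy
print(x.nbytes, X32.nbytes)              # 8192 8192
```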

NumPy/Python version information:

NumPy 1.19.4; Python 3.8.6 (default, Sep 25 2020, 09:36:53)
[GCC 10.2.0]

@rossbar
Contributor

rossbar commented Nov 18, 2020

Correct, this is documented behavior. scipy.fft does not upcast, so perhaps that would work better for your use case.

@eric-wieser
Member

It's perhaps worth adding to our docs that np.longdouble is downcast too.
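A quick check of the longdouble behavior being described (note that the result is version-dependent: the NumPy releases discussed here downcast np.longdouble input to complex128, while a dtype-preserving implementation could return clongdouble instead):

```python
import numpy as np

# np.longdouble input: the NumPy versions discussed in this thread
# downcast the FFT result to complex128.
x = np.zeros(4, dtype=np.longdouble)
X = np.fft.fft(x)
print(X.dtype)  # complex128 on the NumPy versions discussed here
```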

@kyjohnso
Author

I have been using scipy.fft.fft to preserve the type (among other benefits, like overwrite_x). However, I prefer to keep the dependencies of my libraries to a minimum (just NumPy if possible), and this default behaviour for numpy.fft seems strange. You are correct that this is documented and expected behaviour, so this isn't a bug report. I am interested, though, in a bigger conversation about why promotion and demotion to np.complex128 is the default behaviour. Is there something fundamental about the FFT algorithm used that makes promotion preferable, either for precision or for processing efficiency?
Thanks,

@eric-wieser
Member

I am interested though in a bigger conversation as to why the promotion and demotion to np.complex128 is the default behaviour.

Supporting 4 types (half, single, double, longdouble) requires some combination of:

  • 4 times as much code
  • Careful application of .c.src-style templating
  • Working out how to build C++

These are all a fair amount of work to get right; work that was probably judged not to be worthwhile, since SciPy has already done it.

@mreineck
Contributor

I expect you won't be able to avoid C++ extensions forever, though... are there really no plans for this?

@charris
Member

charris commented Nov 20, 2020

aren't there really any plans for this?

There were preliminary plans about 9 years ago, but they evaporated. A big problem with C++ historically was that the compilers were not reliable and consistent; the error messages were also garbage, and the language was overloaded with "features", so writing good C++ is harder than writing good C. I haven't used C++ lately, but I think it has greatly improved since C++11. Go seems to be the new kid on the block.

@mreineck
Contributor

Yes ... and Rust, and Julia, etc. :-)

Still, I'd bet that if numpy decided to support extensions in just one more language, C++ would offer by far the best return on investment.
I'd encourage discussing their experience with C++ extensions with the scipy developers. Personally I have only been involved in one of them, but I'd say the integration went surprisingly well (thanks to the amazing work of @peterbell10 on the developer side and to pybind11 on the interfacing side) and hasn't caused any maintenance pains since then. And I'm really certain there wouldn't have been any other way to provide the same amount of functionality with comparable effort in another language.
But please don't take (only) my word for that ;-)

@leofang
Contributor

leofang commented Nov 22, 2020

It is worth noting that CuPy deliberately deviates from NumPy's behavior precisely because of this upcast concern:
https://docs.cupy.dev/en/stable/reference/fft.html#code-compatibility-features
But we can do this easily because cuFFT provides all the functionality we need, and the code explosion is minimal.

@crisluengo

I’m happy to show how to use the C preprocessor to generate code for 4 types from one template. It is surprisingly simple once you know the tricks.

@WarrenWeckesser
Member

xref older issue: #6012

@asmeurer
Member

The array API requires fft functions to return outputs with the same precision as the input. Given that NumPy is making breaking changes in NumPy 2.0 for array API compatibility (#25076), it would be good to get this fixed for 2.0.

Ever since #25536, the fft functions are implemented under the hood with ufuncs; the top-level functions remain pure Python wrappers around the np.fft._pocketfft_umath.fft ufunc, since several functions and keyword arguments share the same underlying ufunc.

I'm planning to make a PR to fix this, but I want to know what the best approach to fixing this is.

It sounds like the ideal fix would be updating pocketfft, but I suspect that would also be the most difficult option. Given how close the 2.0 branch date is, would it be better to just manually downcast to get the API breakage in, and update the underlying implementation later? A potential downside is that a true single-precision pocketfft implementation might return different results than a downcast double-precision one, so putting this off would cause a second, smaller breakage in NumPy 2.1 when the results change.

If we do just want to manually downcast for now, is it better to do that in the ufunc by adding an 'F' type signature, or to just do it in the Python wrappers?
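The Python-wrapper option could be sketched roughly like this (fft_preserving_dtype is a hypothetical name, not NumPy API, and a real wrapper would also have to handle the n, axis, norm, and out arguments):

```python
import numpy as np

def fft_preserving_dtype(x):
    # Hypothetical wrapper-level downcast: run the double-precision FFT,
    # then cast back when the input was single precision.
    x = np.asarray(x)
    if x.dtype in (np.float32, np.complex64):
        return np.fft.fft(x).astype(np.complex64, copy=False)
    return np.fft.fft(x)

out = fft_preserving_dtype(np.ones(8, dtype=np.complex64))
print(out.dtype)  # complex64
```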

Or if people think updating pocketfft would be doable, I can go for that. I think scipy's implementation is already templated C++ that supports single and double precision. What is the status of C++ in NumPy?

CC @mhvk @rgommers

@charris
Member

charris commented Jan 26, 2024

NumPy uses C++ these days, so that is not a problem. I'm curious whether the fft interfaces are the same in NumPy and SciPy apart from the 32-bit support? In any case, downcasting to get compatibility with the array API seems fine to me; we could then bring the SciPy version over in 2.1.0 if it is decided that it is too late for 2.0.0.

@mhvk
Contributor

mhvk commented Jan 26, 2024

@asmeurer - updating pocketfft to the C++ version was definitely the plan, but I haven't yet looked at it closely enough to know whether the interface is easily compatible, though I suspect it is. I did already ensure the pocketfft piece is separated out as much as possible.

In principle, with the ufuncs, it is easy to downcast, and for large stacks of arrays the memory overhead is not too large.

Anyway, would be happy to help!

@asmeurer
Member

@mhvk just to be clear, you think downcasting in the ufunc or in the Python wrappers is better?

@mreineck
Contributor

My feeling is that switching to C++ pocketfft on a short timescale would be quite ambitious, but on the other hand

  • the major version switch to numpy 2.0 might be the ideal time for such a change
  • there could be a big additional benefit: assuming that numpy adopts the full pocketfft functionality (including DCT/DST etc., perhaps without giving them a public interface yet), future versions of scipy that require at least numpy 2.0 will be able to ship without their own pocketfft copy, which removes a lot of redundancy and reduces binary size.

So it would be great (IMO), if the circumstances allow it.

@mhvk
Contributor

mhvk commented Jan 28, 2024

See #25711 for what hopefully is a fix - but please check!
