Related to #11000, it would be good to add support for 32-bit computations for estimators in the `neural_network` module. This was done for `BernoulliRBM` in #16352. Because performance is bound by the dot product, this is going to have a large impact (cf. #17641 (comment)).
I would even argue that, unlike other models, it could make sense to add a `dtype=np.float32` parameter and make calculations in 32 bit by default, regardless of `X.dtypes`. We could also consider supporting `dtype=np.float16`.
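A rough sketch of how this could look from the user side. The `dtype` keyword does not exist on `MLPClassifier` today, so the snippet only casts the input itself; without internal support the estimator is expected to upcast back to float64, which is exactly what this issue asks to change:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Hypothetical API being proposed (not valid today), roughly:
#
#     clf = MLPClassifier(hidden_layer_sizes=(256,), dtype=np.float32)
#
# i.e. the estimator would cast inputs and keep all intermediate
# arrays (activations, gradients, coefficients) in 32 bit.

X, y = make_classification(n_samples=1_000, n_features=50, random_state=0)
X32 = X.astype(np.float32)  # halves the memory footprint of the input

clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=50, random_state=0)
clf.fit(X32, y)

# Without 32-bit support this is expected to print float64;
# float32 is what this issue asks for.
print(clf.coefs_[0].dtype)
```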