Add 32 bit support to neural_network module #17700

Closed
@rth

Description

Related to #11000, it would be good to add support for 32-bit computations for the estimators in the neural_network module. This was done for BernoulliRBM in #16352. Because performance is bound by the dot product, this is going to have a large impact (cf. #17641 (comment)).

I would even argue that, unlike other models, it could make sense to add a dtype=np.float32 parameter and perform calculations in 32-bit by default, regardless of X.dtype. We could also consider supporting dtype=np.float16.
