DEBUG: Buffer dtype mismatch, expected 'const float' but got 'double' #30144

Draft: ogrisel wants to merge 1 commit into main
Conversation

ogrisel (Member) commented Oct 24, 2024

Trying to isolate a low-level Cython error when calling loss_gradient from the check_classifiers_train estimator check with memory-mapped inputs.

Originally discovered in #30143 (comment).
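
A minimal reproduction sketch of the scenario being isolated. The exact estimator is not stated in this PR (it comes from #30143), so using LogisticRegression with the newton-cholesky solver is an assumption based on the guess further down this thread; check_classifiers_train is an internal helper whose signature may differ across versions.

```python
# Hypothetical reproduction sketch, not the PR's own test code.
# Assumptions: the failing estimator is LogisticRegression with
# solver="newton-cholesky", and check_classifiers_train accepts the
# readonly_memmap and X_dtype keyword arguments.
from sklearn.linear_model import LogisticRegression
from sklearn.utils.estimator_checks import check_classifiers_train

check_classifiers_train(
    "LogisticRegression",
    LogisticRegression(solver="newton-cholesky"),
    readonly_memmap=True,  # memory-mapped inputs, as in the report
    X_dtype="float32",
)
# On affected CI setups this reportedly fails inside loss_gradient with:
# ValueError: Buffer dtype mismatch, expected 'const float' but got 'double'
```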

ogrisel marked this pull request as draft on October 24, 2024, 19:42
✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit c7bcba0.

ogrisel (Member, Author) commented Oct 24, 2024

It can be reproduced in some CI environments, but I could not reproduce it in my dev environment.

ogrisel (Member, Author) commented Oct 24, 2024

If someone wants to investigate the problem, please feel free to open a PR ;)

lorentzenchr (Member)

My guess:

  • The NewtonCholeskySolver is called with float32 input.
  • It falls back to lbfgs.
  • lbfgs only works with float64 and casts/promotes some arrays from float32 to float64 (see the sketch after this list).
  • A flaw in the validation logic in loss.py, in particular in loss_gradient, causes the mixed-dtype case not to be caught.
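
The float64 promotion in the third bullet can be observed directly with SciPy's L-BFGS-B wrapper, which scikit-learn's lbfgs solvers build on: the underlying Fortran routine works in double precision, so a float32 starting point is promoted and the objective only ever sees float64 parameters. A minimal, self-contained sketch:

```python
# Sketch: SciPy's L-BFGS-B promotes a float32 starting point to float64.
import numpy as np
from scipy.optimize import minimize

seen_dtypes = set()

def objective(w):
    seen_dtypes.add(w.dtype)  # record the dtype the optimizer passes in
    return float(np.sum(w ** 2)), 2.0 * w  # objective value and gradient

w0 = np.zeros(3, dtype=np.float32)  # float32 input, as in the guess above
minimize(objective, w0, jac=True, method="L-BFGS-B")
print(seen_dtypes)  # {dtype('float64')}: the solver only works in float64
```

If the surrounding scikit-learn code keeps other arrays (e.g. the targets or sample weights) in float32 while the coefficients come back as float64, the Cython loss_gradient can then be called with mixed dtypes, matching the buffer dtype mismatch in the title.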
