ENH: ufunc helper for variance #13263
Conversation
I'm not sure I understand why the test fails on Windows. On a Windows machine dtype 'g' gives me a float64, whereas on a Linux machine I get float128.
Not sure what the question is, but for MSVC, …
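The platform difference the comments describe can be observed directly: NumPy's 'g' type code maps to the C `long double`, which MSVC defines as an alias for `double` (8 bytes), while glibc/x86-64 builds typically use 80-bit extended precision stored in 16 bytes. A quick check (a sketch, not part of the PR):

```python
import numpy as np

# 'g' is the type code for np.longdouble, i.e. the platform's C long double.
# Under MSVC it is identical to double (8 bytes, 15 decimal digits);
# on Linux/x86-64 it is usually 80-bit extended precision padded to 16 bytes.
print("itemsize:", np.dtype('g').itemsize)
print("decimal precision:", np.finfo(np.longdouble).precision)
```

This is why a test that assumes extra precision from 'g' passes on Linux but fails on Windows.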
@charris Thanks for your reply. The real issue is that the test fails (only on Windows) with the following error: …
I don't know if it's related, but we've had terrible troubles with something similar here: nipy/nibabel#665
Tagging in #12096. I guess it's good to see that it's not just us... |
Force-pushed from 91883c1 to 719660f
@eric-wieser … EDIT: @matthew-brett @effigies …
@eric-wieser …
Looking at the test failures, this is due to … EDIT: To be honest, I tend towards just adapting the test, or marking (that one part) as known-fail on Windows. These types of "inconsistencies" exist for many other ufuncs just the same if you use long double on Windows.
Hey there, I was just wondering what the status of this feature is -- is it still being worked on? It would be a welcome change, since we're looking to use it as a significant improvement for a downstream task in scikit-learn :)
Not really. I'm not too happy with the code, and it's failing a few tests on Windows.
@0x0L I'll close the PR since you are not planning on continuing the work. Thank you for giving it a go! |
Hello
This is a PR associated with #13199. It might also fix #9631.
It implements a two-pass mean variance algorithm with an error correction term (see discussion in #13199).
It's more of a hack than a perfect solution: instead of allocating one array for the demeaned input, we allocate two arrays (one if we drop the bias-correction term), each with only a single dimension already reduced. Memory-wise this should never be worse than before, but it could be just as bad.
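The corrected two-pass algorithm referenced above can be sketched in a few lines. This is an illustrative NumPy-level version of the idea (the function name `two_pass_variance` and its signature are hypothetical, not the actual C ufunc helper in the patch): the first pass computes the mean, and the second pass accumulates squared deviations plus a correction term that compensates for rounding error in the computed mean.

```python
import numpy as np

def two_pass_variance(x, ddof=0):
    # Hypothetical sketch of the corrected two-pass variance algorithm
    # discussed in gh-13199; not the actual implementation in this PR.
    x = np.asarray(x, dtype=np.float64)
    n = x.size
    # Pass 1: mean.
    mean = x.sum() / n
    # Pass 2: squared deviations from the (inexact) mean.
    d = x - mean
    # (sum of residuals)^2 / n is the error-correction term: it is exactly
    # zero in infinite precision and cancels the floating-point error
    # introduced by the computed mean.
    return (np.dot(d, d) - d.sum() ** 2 / n) / (n - ddof)
```

Dropping the `d.sum() ** 2 / n` correction term corresponds to the "one array" variant mentioned above, at the cost of slightly worse rounding behavior.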