Make _weighted_percentile more robust #6189
Comments
@mrecachinas I need to look more deeply into that but it doesn't seem to touch …
This addresses scikit-learn#6189. It follows [the Weighted Percentile method](https://en.wikipedia.org/wiki/Percentile#The_Weighted_Percentile_method) from Wikipedia. An example that this addresses is

```
y_true = [0, 1]
weights = [1, 1]
_weighted_percentile(y_true, weights, 50)
# before: output ==> 0
# after: output ==> 0.5
```
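For reference, a minimal sketch of that Wikipedia interpolation scheme; the function name and details here are illustrative only, not the actual `_weighted_percentile` implementation:

```
import numpy as np

def weighted_percentile_interpolated(values, weights, percentile):
    # Sketch of the Wikipedia "Weighted percentile" method with linear
    # interpolation; illustrative only, not sklearn's _weighted_percentile.
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    # Percent rank of each sorted value: p_n = 100 / S_N * (S_n - w_n / 2)
    cum_weights = np.cumsum(weights)
    percent_ranks = 100.0 / cum_weights[-1] * (cum_weights - weights / 2.0)
    # Interpolate linearly between adjacent percent ranks (clamped at the ends)
    return np.interp(percentile, percent_ranks, values)

print(weighted_percentile_interpolated([0, 1], [1, 1], 50))  # 0.5
```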
Another way to get this feature would be if numpy/numpy#9211 gets merged.
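(For later readers: as far as I know, weighted quantiles did eventually land in NumPy 2.0, but only for `method="inverted_cdf"`, so they still don't interpolate:)

```
import numpy as np

y = [0, 1]
# weights= needs NumPy >= 2.0 and is only accepted with method="inverted_cdf",
# which picks an observed value rather than interpolating
np.quantile(y, 0.5, weights=[1, 1], method="inverted_cdf")  # -> 0
```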
@lorentzenchr I just found this old issue.
Actually, this is a non-issue:

```
import numpy as np

y = [0, 1]  # from the example above
np.quantile(y, 0.5, method="inverted_cdf")  # 0
```

results in

```
from sklearn.metrics import mean_absolute_error

mean_absolute_error(y, [0, 0])  # 0.5
mean_absolute_error(y, [1, 1])  # 0.5
```

Also, each value …
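A small check of that point, for what it's worth: with `y = [0, 1]`, any constant prediction in `[0, 1]` attains the same MAE of 0.5, so 0 and 0.5 are equally valid medians for this purpose.

```
import numpy as np
from sklearn.metrics import mean_absolute_error

y = [0, 1]
# For a constant prediction c in [0, 1], MAE = (c + (1 - c)) / 2 = 0.5
for c in np.linspace(0, 1, 5):
    print(c, mean_absolute_error(y, [c, c]))  # always 0.5
```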
As reported by @maniteja123:
Do we want to do some sort of linear interpolation as described in this method?
https://en.wikipedia.org/wiki/Percentile#The_Weighted_Percentile_method
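For concreteness, my reading of the formula in that Wikipedia section: with values sorted as $v_1 \le \dots \le v_N$, weights $w_n$ and cumulative weights $S_n = \sum_{k \le n} w_k$, each value gets the percent rank

$$
p_n = \frac{100}{S_N}\left(S_n - \frac{w_n}{2}\right),
$$

and the percentile $P$ is obtained by linear interpolation between the two adjacent values:

$$
v = v_k + \frac{P - p_k}{p_{k+1} - p_k}\,(v_{k+1} - v_k), \qquad p_k \le P \le p_{k+1}.
$$

For `y_true = [0, 1]` with unit weights this gives $p_1 = 25$, $p_2 = 75$ and hence $v = 0.5$ at $P = 50$.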