Closed as not planned
Describe the bug
Hi,
It seems that in the edge case where there are tied scores, roc_auc_score
actually makes a wrong computation:
>>> roc_auc_score([1, 0, 0, 1], [2, 5, 10, 10])
0.375
>>> # Expected 0.25 or 0.5, see below.
One would expect either 0.5 or 0.25, depending on whether the area is computed by interpolating to the convex hull (which makes sense with probabilistic mixtures for the ROC curve).
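A sketch of where the three candidate values could come from, using the pairwise (Mann-Whitney) formulation of AUC. This is my own illustration, not sklearn internals: a positive/negative pair with tied scores can be credited 0.5 (trapezoidal interpolation), 0 (no credit), or 1 (full credit, which happens to equal the convex-hull area on this example):

```python
y = [1, 0, 0, 1]
s = [2, 5, 10, 10]

pos = [si for yi, si in zip(y, s) if yi == 1]
neg = [si for yi, si in zip(y, s) if yi == 0]
n_pairs = len(pos) * len(neg)

def auc(tie_value):
    """AUC as the fraction of (positive, negative) pairs ranked
    correctly, with ties credited tie_value each."""
    total = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                total += 1.0
            elif p == n:
                total += tie_value
    return total / n_pairs

print(auc(0.5))  # 0.375 -- what roc_auc_score returns
print(auc(0.0))  # 0.25  -- ties get no credit
print(auc(1.0))  # 0.5   -- ties get full credit; equals the convex-hull area here
```

Only the single tied pair (the positive and the negative both scored 10) differs between the three treatments, which is exactly the 0.125 gap between 0.25, 0.375, and 0.5.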
I tried illustrating this example below (keep in mind that this behaviour is speculated by myself; I didn't go through any of the sklearn code).
Any feedback is welcome.
Cheers!
Steps/Code to Reproduce
from sklearn.metrics import roc_auc_score
roc_auc_score([1, 0, 0, 1], [2, 5, 10, 10])
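For reference, the 0.375 can also be reproduced geometrically without sklearn: sweeping thresholds over the unique scores gives the empirical ROC points, and the tie at score 10 yields a single diagonal segment from (0, 0) to (0.5, 0.5), whose trapezoidal area accounts for the 0.125 above 0.25 (a sketch, assuming the standard ">= threshold" sweep):

```python
y = [1, 0, 0, 1]
s = [2, 5, 10, 10]
P = sum(y)          # number of positives
N = len(y) - P      # number of negatives

# Sweep thresholds over the unique scores, highest first.
points = [(0.0, 0.0)]
for t in sorted(set(s), reverse=True):
    tp = sum(1 for yi, si in zip(y, s) if yi == 1 and si >= t)
    fp = sum(1 for yi, si in zip(y, s) if yi == 0 and si >= t)
    points.append((fp / N, tp / P))

# points == [(0, 0), (0.5, 0.5), (1.0, 0.5), (1.0, 1.0)]
area = sum((x2 - x1) * (y1 + y2) / 2
           for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(points, area)  # area == 0.375
```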
Expected Results
0.25 or 0.5
Actual Results
0.375
Versions
1.5.0