Currently, the following metrics raise an error when str labels are provided and pos_label is not defined but is needed to compute the metric:
average_precision_score
f1_score
fbeta_score
jaccard_score
precision_recall_curve
precision_score
recall_score
roc_curve
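The error described above is easy to trigger with any of the listed metrics; a minimal sketch using precision_score (the labels here are made up for illustration):

```python
from sklearn.metrics import precision_score

# Toy string labels; any string labels trigger the same check.
y_true = ["spam", "ham", "spam", "ham"]
y_pred = ["spam", "spam", "ham", "ham"]

try:
    # The default pos_label=1 is not among the string labels, so this raises.
    precision_score(y_true, y_pred)
except ValueError as exc:
    print(f"ValueError: {exc}")

# Passing pos_label explicitly resolves the ambiguity.
print(precision_score(y_true, y_pred, pos_label="spam"))
```

The same pattern (explicit pos_label) works for all the metrics in the list.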
We are trying to make sure that the error messages are consistent in #18192.
brier_score_loss should supposedly follow the same behaviour as the other metrics. However, it seems that, up to now, it performs some inference when labels are strings.
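To see the contrast, here is a minimal sketch of calling brier_score_loss on string labels with an explicit pos_label, which is unambiguous regardless of the inference behaviour (the label names and probabilities are made up; whether omitting pos_label infers the positive class or raises depends on the scikit-learn version, so only the explicit form is shown):

```python
from sklearn.metrics import brier_score_loss

# Hypothetical string labels and predicted probabilities of the "spam" class.
y_true = ["ham", "spam", "spam", "ham"]
y_prob = [0.1, 0.9, 0.8, 0.3]

# Explicit pos_label: the mean squared error between y_prob and the
# binarized y_true (spam -> 1, ham -> 0).
print(brier_score_loss(y_true, y_prob, pos_label="spam"))
```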
Thus, two questions can be raised:
Is it normal that brier_score_loss can do such inference, or should it raise a similar error to the other metrics?
If it should raise an error, should we treat this as a bug fix or go through a deprecation cycle?
Related: #10010 - brier_score_loss is (now) also the only one using a default of pos_label=None (instead of pos_label=1). It also behaves differently for int label values from the metrics mentioned above:
# if pos_label=None, when y_true is in {-1, 1} or {0, 1},
# pos_label is set to 1 (consistent with precision_recall_curve/roc_curve),
# otherwise pos_label is set to the greater label
# (different from precision_recall_curve/roc_curve,
# the purpose is to keep backward compatibility).
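The first branch of that comment (labels in {-1, 1} mapping pos_label to 1, consistent with precision_recall_curve/roc_curve) can be checked directly; a minimal sketch with made-up values:

```python
from sklearn.metrics import brier_score_loss

# Labels in {-1, 1}: with pos_label=None, pos_label is inferred as 1,
# matching the convention used by precision_recall_curve/roc_curve.
y_true = [-1, 1, 1, -1]
y_prob = [0.1, 0.9, 0.8, 0.2]

default_score = brier_score_loss(y_true, y_prob)            # pos_label inferred
explicit_score = brier_score_loss(y_true, y_prob, pos_label=1)
print(default_score, explicit_score)  # the two calls agree
```

The "greater label" fallback in the second branch is the part that diverges from the other metrics and that the deprecation discussed here would remove.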
I believe we should go through a deprecation cycle (and update the docstring accordingly) to make it consistent with the others. I see no reason why Brier score would be special.