Ambiguity in brier score doc fixed #10969

Merged · 18 commits · May 23, 2018
doc/modules/calibration.rst (9 changes: 6 additions & 3 deletions)

```diff
@@ -109,9 +109,12 @@ classification with 100,000 samples (1,000 of them are used for model fitting)
 with 20 features. Of the 20 features, only 2 are informative and 10 are
 redundant. The figure shows the estimated probabilities obtained with
 logistic regression, a linear support-vector classifier (SVC), and linear SVC with
-both isotonic calibration and sigmoid calibration. The calibration performance
-is evaluated with Brier score :func:`brier_score_loss`, reported in the legend
-(the smaller the better).
+both isotonic calibration and sigmoid calibration.
+The Brier score is a metric which is a combination of calibration loss and refinement loss,
```
Member commented:
This is no longer grammatical. Try "Performance is evaluated with ...." then adding a new sentence summarising the intention of the metric.

Member commented:
you're not explaining calibration and refinement loss, right?

```diff
+:func:`brier_score_loss`, reported in the legend (the smaller the better).
+Calibration loss is defined as the mean squared deviation from empirical probabilities
+derived from the slope of ROC segments. Refinement loss can be defined as the expected
+optimal loss as measured by the area under the optimal cost curve.
 
 .. figure:: ../auto_examples/calibration/images/sphx_glr_plot_calibration_curve_002.png
    :target: ../auto_examples/calibration/plot_calibration_curve.html
```
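
For orientation, here is a minimal sketch (not part of this PR) of the experiment the new paragraph describes: fit the same kinds of models and report :func:`brier_score_loss` for each, smaller being better. The dataset and split sizes are scaled down from the 100,000-sample setup so it runs quickly, and the `random_state` values are arbitrary; `CalibratedClassifierCV`, `LinearSVC`, `LogisticRegression`, and `brier_score_loss` are existing scikit-learn APIs.

```python
# Sketch of the comparison described in calibration.rst: raw and calibrated
# classifiers scored with brier_score_loss (smaller is better).
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# 2 informative and 10 redundant features, as in the documented setup;
# sample counts are reduced here to keep the sketch fast.
X, y = make_classification(n_samples=10_000, n_features=20, n_informative=2,
                           n_redundant=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=1_000, random_state=42)

models = {
    "Logistic regression": LogisticRegression(),
    "LinearSVC + isotonic": CalibratedClassifierCV(LinearSVC(), method="isotonic"),
    "LinearSVC + sigmoid": CalibratedClassifierCV(LinearSVC(), method="sigmoid"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    prob_pos = model.predict_proba(X_test)[:, 1]  # probability of the positive class
    print(f"{name}: Brier score = {brier_score_loss(y_test, prob_pos):.4f}")
```

As the user guide notes elsewhere, sigmoid (Platt) calibration is the safer choice when calibration data is scarce, while isotonic calibration can do better when data is plentiful.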
sklearn/metrics/classification.py (7 changes: 2 additions & 5 deletions)

```diff
@@ -1918,9 +1918,7 @@ def _check_binary_probabilistic_predictions(y_true, y_prob):
 
 def brier_score_loss(y_true, y_prob, sample_weight=None, pos_label=None):
     """Compute the Brier score.
-
     The smaller the Brier score, the better, hence the naming with "loss".
-
     Across all items in a set N predictions, the Brier score measures the
     mean squared difference between (1) the predicted probability assigned
     to the possible outcomes for item i, and (2) the actual outcome.
```
```diff
@@ -1929,15 +1927,14 @@ def brier_score_loss(y_true, y_prob, sample_weight=None, pos_label=None):
     takes on a value between zero and one, since this is the largest
     possible difference between a predicted probability (which must be
     between zero and one) and the actual outcome (which can take on values
-    of only 0 and 1).
-
+    of only 0 and 1). The Brier loss is composed of refinement loss and
+    calibration loss.
     The Brier score is appropriate for binary and categorical outcomes that
     can be structured as true or false, but is inappropriate for ordinal
     variables which can take on three or more values (this is because the
     Brier score assumes that all possible outcomes are equivalently
     "distant" from one another). Which label is considered to be the positive
     label is controlled via the parameter pos_label, which defaults to 1.
-
     Read more in the :ref:`User Guide <calibration>`.
 
     Parameters
```
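
To make the docstring's definition concrete, a worked example with hypothetical values (not taken from this PR). `brier_score_loss` and its `pos_label` parameter are existing scikit-learn API; `brier_decomposition` is a helper written here for illustration, and it uses the simple group-by-score (Murphy-style) split into calibration and refinement rather than the ROC-segment and cost-curve formulation quoted in calibration.rst.

```python
import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.1, 0.9, 0.8, 0.3])  # predicted P(y == 1)

# By hand: ((0.1-0)**2 + (0.9-1)**2 + (0.8-1)**2 + (0.3-0)**2) / 4 = 0.0375
print(brier_score_loss(y_true, y_prob))  # 0.0375

# pos_label picks which class counts as the positive outcome:
print(brier_score_loss(np.array(["ham", "spam", "spam", "ham"]), y_prob,
                       pos_label="spam"))  # 0.0375 again

def brier_decomposition(y_true, y_prob):
    """Split the Brier score into calibration + refinement terms by grouping
    predictions with identical scores (illustrative helper, not sklearn API)."""
    calibration = refinement = 0.0
    for p in np.unique(y_prob):
        mask = y_prob == p
        y_bar = y_true[mask].mean()  # empirical probability among samples scored p
        w = mask.mean()              # fraction of samples in this group
        calibration += w * (p - y_bar) ** 2      # miscalibration at score p
        refinement += w * y_bar * (1.0 - y_bar)  # within-group outcome spread
    return calibration, refinement

# Every score here is unique, so refinement == 0 and calibration alone
# reproduces the Brier score: 0.0375 == 0.0375 + 0.0
cal, ref = brier_decomposition(y_true, y_prob)
print(cal, ref, cal + ref)
```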