Commit 53e0230

Rename pinball_loss to mean_pinball_loss
1 parent bc1882a commit 53e0230

9 files changed (+68, -64 lines)

doc/modules/classes.rst

Lines changed: 1 addition & 1 deletion
@@ -991,7 +991,7 @@ details.
    metrics.mean_poisson_deviance
    metrics.mean_gamma_deviance
    metrics.mean_tweedie_deviance
-   metrics.pinball_loss
+   metrics.mean_pinball_loss

 Multilabel ranking metrics
 --------------------------

doc/modules/model_evaluation.rst

Lines changed: 16 additions & 16 deletions
@@ -1961,7 +1961,7 @@ The :mod:`sklearn.metrics` module implements several loss, score, and utility
 functions to measure regression performance. Some of those have been enhanced
 to handle the multioutput case: :func:`mean_squared_error`,
 :func:`mean_absolute_error`, :func:`explained_variance_score`,
-:func:`r2_score` and :func:`pinball_loss`.
+:func:`r2_score` and :func:`mean_pinball_loss`.


 These functions have an ``multioutput`` keyword argument which specifies the
@@ -2359,38 +2359,38 @@ sensitive to relative errors.
 Pinball loss
 ------------

-The :func:`pinball_loss` function is mostly used to evaluate the predictive
-performance of quantile regression models. The `pinball loss
-<https://en.wikipedia.org/wiki/Quantile_regression#Computation>`_ is
-equivalent to :func:`mean_absolute_error` when the quantile parameter ``alpha``
-is set to 0.5.
+The :func:`mean_pinball_loss` function is mostly used to evaluate the
+predictive performance of quantile regression models. The `pinball loss
+<https://en.wikipedia.org/wiki/Quantile_regression#Computation>`_ is equivalent
+to :func:`mean_absolute_error` when the quantile parameter ``alpha`` is set to
+0.5.

 .. math::

   \text{pinball}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \alpha \max(y_i - \hat{y}_i, 0) + (1 - \alpha) \max(\hat{y}_i - y_i, 0)

-Here is a small example of usage of the :func:`pinball_loss` function::
+Here is a small example of usage of the :func:`mean_pinball_loss` function::

-  >>> from sklearn.metrics import pinball_loss
+  >>> from sklearn.metrics import mean_pinball_loss
   >>> y_true = [1, 2, 3]
-  >>> pinball_loss(y_true, [0, 2, 3], alpha=0.1)
+  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.1)
   0.03...
-  >>> pinball_loss(y_true, [1, 2, 4], alpha=0.1)
+  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.1)
   0.3...
-  >>> pinball_loss(y_true, [0, 2, 3], alpha=0.9)
+  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.9)
   0.3...
-  >>> pinball_loss(y_true, [1, 2, 4], alpha=0.9)
+  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.9)
   0.03...
-  >>> pinball_loss(y_true, y_true, alpha=0.1)
+  >>> mean_pinball_loss(y_true, y_true, alpha=0.1)
   0.0
-  >>> pinball_loss(y_true, y_true, alpha=0.9)
+  >>> mean_pinball_loss(y_true, y_true, alpha=0.9)
   0.0

 It is possible to build a scorer object with a specific choice of alpha to
 perform, for instance to evaluate a regressor of the 95th percentile::

   >>> from sklearn.metrics import make_scorer
-  >>> pinball_loss_95p = make_scorer(pinball_loss, alpha=0.95)
+  >>> mean_pinball_loss_95p = make_scorer(mean_pinball_loss, alpha=0.95)

 Such a scorer can be used in a cross-validation loop:

@@ -2404,7 +2404,7 @@ Such a scorer can be used in a cross-validation loop:
   ...     alpha=0.95,
   ...     random_state=0,
   ... )
-  >>> cross_val_score(estimator, X, y, cv=5, scoring=pinball_loss_95p)
+  >>> cross_val_score(estimator, X, y, cv=5, scoring=mean_pinball_loss_95p)
   array([11.1..., 10.4... , 24.4..., 9.2..., 12.9...])

 It is also possible to build scorer objects for hyper-parameter tuning, in
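Editor's note: the formula above transcribes directly into NumPy, which is handy for checking the doctest values by hand. A minimal sketch; the helper name pinball_loss_reference is illustrative, not part of scikit-learn:

    import numpy as np

    def pinball_loss_reference(y_true, y_pred, alpha=0.5):
        # Direct transcription of the user-guide formula: alpha weights
        # under-predictions, (1 - alpha) weights over-predictions.
        diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
        return np.mean(alpha * np.maximum(diff, 0)
                       + (1 - alpha) * np.maximum(-diff, 0))

    # Matches the first doctest value above: 0.1 * 1 / 3 ~ 0.0333
    print(pinball_loss_reference([1, 2, 3], [0, 2, 3], alpha=0.1))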

doc/whats_new/v1.0.rst

Lines changed: 3 additions & 3 deletions
@@ -134,9 +134,9 @@ Changelog
   class methods and will be removed in 1.2.
   :pr:`18543` by `Guillaume Lemaitre`_.

-- |Feature| :func:`metrics.pinball_loss` exposes the pinball loss for
-  quantile regression. :pr:`19415`
-  by :user:`Xavier Dupré <sdpython>` and :user:`Oliver Grisel <ogrisel>`.
+- |Feature| :func:`metrics.mean_pinball_loss` exposes the pinball loss for
+  quantile regression. :pr:`19415` by :user:`Xavier Dupré <sdpython>`
+  and :user:`Oliver Grisel <ogrisel>`.

 :mod:`sklearn.naive_bayes`
 ..........................

examples/ensemble/plot_gradient_boosting_quantile.py

Lines changed: 11 additions & 11 deletions
@@ -54,7 +54,7 @@ def f(x):
 # average, there should be the same number of target observations above and
 # below the predicted values.
 from sklearn.ensemble import GradientBoostingRegressor
-from sklearn.metrics import pinball_loss, mean_squared_error
+from sklearn.metrics import mean_pinball_loss, mean_squared_error


 all_models = {}
@@ -122,8 +122,8 @@ def f(x):
 # Analysis of the error metrics
 # -----------------------------
 #
-# Measure the models with :func:`mean_squared_error` and :func:`pinball_loss`
-# metrics on the training dataset.
+# Measure the models with :func:`mean_squared_error` and
+# :func:`mean_pinball_loss` metrics on the training dataset.
 import pandas as pd


@@ -138,7 +138,7 @@ def highlight_min(x):
     metrics = {'model': name}
     y_pred = gbr.predict(X_train)
     for alpha in [0.05, 0.5, 0.95]:
-        metrics["pbl=%1.2f" % alpha] = pinball_loss(
+        metrics["pbl=%1.2f" % alpha] = mean_pinball_loss(
             y_train, y_pred, alpha=alpha)
     metrics['MSE'] = mean_squared_error(y_train, y_pred)
     results.append(metrics)
@@ -166,7 +166,7 @@ def highlight_min(x):
     metrics = {'model': name}
     y_pred = gbr.predict(X_test)
     for alpha in [0.05, 0.5, 0.95]:
-        metrics["pbl=%1.2f" % alpha] = pinball_loss(
+        metrics["pbl=%1.2f" % alpha] = mean_pinball_loss(
             y_test, y_pred, alpha=alpha)
     metrics['MSE'] = mean_squared_error(y_test, y_pred)
     results.append(metrics)
@@ -244,8 +244,8 @@ def coverage_fraction(y, y_low, y_high):
     min_samples_split=[2, 5, 10, 20, 30, 50],
 )
 alpha = 0.05
-neg_pinball_loss_05p_scorer = make_scorer(
-    pinball_loss,
+neg_mean_pinball_loss_05p_scorer = make_scorer(
+    mean_pinball_loss,
     alpha=alpha,
     greater_is_better=False,  # maximize the negative loss
 )
@@ -254,7 +254,7 @@ def coverage_fraction(y, y_low, y_high):
     gbr,
     param_grid,
     n_iter=10,  # increase this if computational budget allows
-    scoring=neg_pinball_loss_05p_scorer,
+    scoring=neg_mean_pinball_loss_05p_scorer,
     n_jobs=2,
     random_state=0,
 ).fit(X_train, y_train)
@@ -272,14 +272,14 @@ def coverage_fraction(y, y_low, y_high):
 from sklearn.base import clone

 alpha = 0.95
-neg_pinball_loss_95p_scorer = make_scorer(
-    pinball_loss,
+neg_mean_pinball_loss_95p_scorer = make_scorer(
+    mean_pinball_loss,
     alpha=alpha,
     greater_is_better=False,  # maximize the negative loss
 )
 search_95p = clone(search_05p).set_params(
     estimator__alpha=alpha,
-    scoring=neg_pinball_loss_95p_scorer,
+    scoring=neg_mean_pinball_loss_95p_scorer,
 )
 search_95p.fit(X_train, y_train)
 pprint(search_95p.best_params_)
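Editor's note: the scorer pattern this example relies on is worth isolating. Since mean_pinball_loss is a loss (lower is better), make_scorer must negate it so that search routines can uniformly maximize. A self-contained sketch; the make_regression dataset here is only a stand-in for the example's own data:

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import make_scorer, mean_pinball_loss
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=200, noise=10.0, random_state=0)

    # greater_is_better=False makes the scorer return the negated loss,
    # so cross-validation and hyper-parameter search can maximize it.
    neg_mean_pinball_loss_95p = make_scorer(
        mean_pinball_loss, alpha=0.95, greater_is_better=False)

    gbr = GradientBoostingRegressor(loss="quantile", alpha=0.95,
                                    random_state=0)
    scores = cross_val_score(gbr, X, y, cv=3,
                             scoring=neg_mean_pinball_loss_95p)
    print(scores)  # negative values; closer to zero is better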

sklearn/ensemble/tests/test_gradient_boosting_loss_functions.py

Lines changed: 6 additions & 5 deletions
@@ -8,7 +8,7 @@
 from pytest import approx

 from sklearn.utils import check_random_state
-from sklearn.metrics import pinball_loss
+from sklearn.metrics import mean_pinball_loss
 from sklearn.ensemble._gb_losses import RegressionLossFunction
 from sklearn.ensemble._gb_losses import LeastSquaresError
 from sklearn.ensemble._gb_losses import LeastAbsoluteError
@@ -116,9 +116,10 @@ def test_quantile_loss_function():
     y_found = QuantileLossFunction(0.9)(x, np.zeros_like(x))
     y_expected = np.asarray([0.1, 0.0, 0.9]).mean()
     np.testing.assert_allclose(y_found, y_expected)
-    y_found_p = pinball_loss(x, np.zeros_like(x), alpha=0.9)
+    y_found_p = mean_pinball_loss(x, np.zeros_like(x), alpha=0.9)
     np.testing.assert_allclose(y_found, y_found_p)

+
 def test_sample_weight_deviance():
     # Test if deviance supports sample weights.
     rng = check_random_state(13)
@@ -316,7 +317,7 @@ def test_lad_equals_quantiles(seed, alpha):
     ql_weighted_loss = ql(y_true, raw_predictions, sample_weight=weights)
     if alpha == 0.5:
         assert lad_weighted_loss == approx(2 * ql_weighted_loss)
-    pbl_weighted_loss = pinball_loss(y_true, raw_predictions,
-                                     sample_weight=weights,
-                                     alpha=alpha)
+    pbl_weighted_loss = mean_pinball_loss(y_true, raw_predictions,
+                                          sample_weight=weights,
+                                          alpha=alpha)
     assert pbl_weighted_loss == approx(ql_weighted_loss)

sklearn/metrics/__init__.py

Lines changed: 2 additions & 2 deletions
@@ -69,7 +69,7 @@
 from ._regression import mean_squared_log_error
 from ._regression import median_absolute_error
 from ._regression import mean_absolute_percentage_error
-from ._regression import pinball_loss
+from ._regression import mean_pinball_loss
 from ._regression import r2_score
 from ._regression import mean_tweedie_deviance
 from ._regression import mean_poisson_deviance
@@ -134,6 +134,7 @@
     'mean_absolute_error',
     'mean_squared_error',
     'mean_squared_log_error',
+    'mean_pinball_loss',
     'mean_poisson_deviance',
     'mean_gamma_deviance',
     'mean_tweedie_deviance',
@@ -153,7 +154,6 @@
     'plot_det_curve',
     'plot_precision_recall_curve',
     'plot_roc_curve',
-    'pinball_loss',
     'PrecisionRecallDisplay',
     'precision_recall_curve',
     'precision_recall_fscore_support',

sklearn/metrics/_regression.py

Lines changed: 8 additions & 8 deletions
@@ -43,7 +43,7 @@
     "mean_squared_log_error",
     "median_absolute_error",
     "mean_absolute_percentage_error",
-    "pinball_loss",
+    "mean_pinball_loss",
     "r2_score",
     "explained_variance_score",
     "mean_tweedie_deviance",
@@ -195,10 +195,10 @@ def mean_absolute_error(y_true, y_pred, *,
     return np.average(output_errors, weights=multioutput)


-def pinball_loss(y_true, y_pred, *,
-                 sample_weight=None,
-                 alpha=0.5,
-                 multioutput='uniform_average'):
+def mean_pinball_loss(y_true, y_pred, *,
+                      sample_weight=None,
+                      alpha=0.5,
+                      multioutput='uniform_average'):
     """Pinball loss for quantile regression.

     Read more in the :ref:`User Guide <pinball_loss>`.
@@ -241,14 +241,14 @@ def pinball_loss(y_true, y_pred, *,

     Examples
     --------
-    >>> from sklearn.metrics import pinball_loss
+    >>> from sklearn.metrics import mean_pinball_loss
     >>> y_true = [3, -0.5, 2, 7]
     >>> y_pred = [2.5, 0.0, 2, 8]
-    >>> pinball_loss(y_true, y_pred)
+    >>> mean_pinball_loss(y_true, y_pred)
     0.25
     >>> y_true = [3, -0.5, 2, 7]
     >>> y_pred = [2.5, 0.0, 2, 8]
-    >>> pinball_loss(y_true, y_pred, alpha=0.1)
+    >>> mean_pinball_loss(y_true, y_pred, alpha=0.1)
     0.35
     """
     y_type, y_true, y_pred, multioutput = _check_reg_targets(
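Editor's note: as a sanity check on the docstring values in this hunk, the alpha=0.1 case works out by hand from the user-guide formula. Pure-Python arithmetic, no library calls:

    alpha = 0.1
    y_true = [3, -0.5, 2, 7]
    y_pred = [2.5, 0.0, 2, 8]

    # Per-sample terms: alpha * max(t - p, 0) + (1 - alpha) * max(p - t, 0)
    terms = [alpha * max(t - p, 0) + (1 - alpha) * max(p - t, 0)
             for t, p in zip(y_true, y_pred)]
    print([round(v, 2) for v in terms])       # [0.05, 0.45, 0.0, 0.9]
    print(round(sum(terms) / len(terms), 2))  # 0.35, as in the docstring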

sklearn/metrics/tests/test_common.py

Lines changed: 7 additions & 4 deletions
@@ -50,7 +50,7 @@
 from sklearn.metrics import mean_gamma_deviance
 from sklearn.metrics import median_absolute_error
 from sklearn.metrics import multilabel_confusion_matrix
-from sklearn.metrics import pinball_loss
+from sklearn.metrics import mean_pinball_loss
 from sklearn.metrics import precision_recall_curve
 from sklearn.metrics import precision_score
 from sklearn.metrics import r2_score
@@ -102,7 +102,7 @@
     "max_error": max_error,
     "mean_absolute_error": mean_absolute_error,
     "mean_squared_error": mean_squared_error,
-    "pinball_loss": pinball_loss,
+    "mean_pinball_loss": mean_pinball_loss,
     "median_absolute_error": median_absolute_error,
     "mean_absolute_percentage_error": mean_absolute_percentage_error,
     "explained_variance_score": explained_variance_score,
@@ -440,7 +440,7 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
 MULTIOUTPUT_METRICS = {
     "mean_absolute_error", "median_absolute_error", "mean_squared_error",
     "r2_score", "explained_variance_score", "mean_absolute_percentage_error",
-    "pinball_loss"
+    "mean_pinball_loss"
 }

 # Symmetric with respect to their input arguments y_true and y_pred
@@ -461,7 +461,10 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
     "micro_precision_score", "micro_recall_score",

     "matthews_corrcoef_score", "mean_absolute_error", "mean_squared_error",
-    "median_absolute_error", "max_error", "pinball_loss",
+    "median_absolute_error", "max_error",
+
+    # Pinball loss is only symmetric for alpha=0.5 which is the default.
+    "mean_pinball_loss",

     "cohen_kappa_score", "mean_normal_deviance"
 }
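Editor's note: the new inline comment ("only symmetric for alpha=0.5") is easy to confirm empirically. A quick sketch, assuming a scikit-learn build with this commit applied:

    from sklearn.metrics import mean_pinball_loss

    y_a, y_b = [1, 2, 3], [0, 2, 3]

    # At the default alpha=0.5, swapping y_true and y_pred changes nothing.
    print(mean_pinball_loss(y_a, y_b))  # ~0.1667
    print(mean_pinball_loss(y_b, y_a))  # ~0.1667

    # For any other alpha, under- and over-predictions are weighted
    # differently, so the metric is asymmetric in its arguments.
    print(mean_pinball_loss(y_a, y_b, alpha=0.1))  # ~0.0333
    print(mean_pinball_loss(y_b, y_a, alpha=0.1))  # ~0.3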
