@@ -1961,7 +1961,7 @@ The :mod:`sklearn.metrics` module implements several loss, score, and utility
functions to measure regression performance. Some of those have been enhanced
to handle the multioutput case: :func:`mean_squared_error`,
:func:`mean_absolute_error`, :func:`explained_variance_score`,
-:func:`r2_score` and :func:`pinball_loss`.
+:func:`r2_score` and :func:`mean_pinball_loss`.

These functions have a ``multioutput`` keyword argument which specifies the
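(As a minimal sketch, not part of the diff, of how the ``multioutput`` keyword
interacts with :func:`mean_pinball_loss`; the two-target ``y_true``/``y_pred``
arrays and the outputs shown here are illustrative assumptions)::

  >>> import numpy as np
  >>> from sklearn.metrics import mean_pinball_loss
  >>> y_true = np.array([[1, 10], [2, 20], [3, 30]])
  >>> y_pred = np.array([[0, 10], [2, 20], [3, 30]])
  >>> # one loss per target column vs. their uniform average
  >>> mean_pinball_loss(y_true, y_pred, alpha=0.1, multioutput='raw_values').tolist()
  [0.033..., 0.0]
  >>> float(mean_pinball_loss(y_true, y_pred, alpha=0.1, multioutput='uniform_average'))
  0.016...
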
@@ -2359,38 +2359,38 @@ sensitive to relative errors.
Pinball loss
------------

-The :func:`pinball_loss` function is mostly used to evaluate the predictive
-performance of quantile regression models. The `pinball loss
-<https://en.wikipedia.org/wiki/Quantile_regression#Computation>`_ is
-equivalent to :func:`mean_absolute_error` when the quantile parameter ``alpha``
-is set to 0.5.
+The :func:`mean_pinball_loss` function is mostly used to evaluate the
+predictive performance of quantile regression models. The `pinball loss
+<https://en.wikipedia.org/wiki/Quantile_regression#Computation>`_ is equivalent
+to :func:`mean_absolute_error` when the quantile parameter ``alpha`` is set to
+0.5.

.. math::

  \text{pinball}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \alpha \max(y_i - \hat{y}_i, 0) + (1 - \alpha) \max(\hat{y}_i - y_i, 0)

-Here is a small example of usage of the :func:`pinball_loss` function::
+Here is a small example of usage of the :func:`mean_pinball_loss` function::

-  >>> from sklearn.metrics import pinball_loss
+  >>> from sklearn.metrics import mean_pinball_loss
  >>> y_true = [1, 2, 3]
-  >>> pinball_loss(y_true, [0, 2, 3], alpha=0.1)
+  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.1)
  0.03...
-  >>> pinball_loss(y_true, [1, 2, 4], alpha=0.1)
+  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.1)
  0.3...
-  >>> pinball_loss(y_true, [0, 2, 3], alpha=0.9)
+  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.9)
  0.3...
-  >>> pinball_loss(y_true, [1, 2, 4], alpha=0.9)
+  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.9)
  0.03...
-  >>> pinball_loss(y_true, y_true, alpha=0.1)
+  >>> mean_pinball_loss(y_true, y_true, alpha=0.1)
  0.0
-  >>> pinball_loss(y_true, y_true, alpha=0.9)
+  >>> mean_pinball_loss(y_true, y_true, alpha=0.9)
  0.0
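
(To make the formula above concrete, here is a sketch, not part of the diff,
that re-implements it directly with NumPy and reproduces the first value
above; the helper name ``pinball`` is purely illustrative)::

  >>> import numpy as np
  >>> def pinball(y_true, y_pred, alpha):
  ...     # alpha-weighted under-prediction plus (1 - alpha)-weighted
  ...     # over-prediction errors, averaged over samples
  ...     d = np.asarray(y_true) - np.asarray(y_pred)
  ...     return np.mean(alpha * np.maximum(d, 0) + (1 - alpha) * np.maximum(-d, 0))
  >>> float(pinball([1, 2, 3], [0, 2, 3], alpha=0.1))
  0.03...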

It is possible to build a scorer object with a specific choice of ``alpha``,
for instance to evaluate a regressor of the 95th percentile::

  >>> from sklearn.metrics import make_scorer
-  >>> pinball_loss_95p = make_scorer(pinball_loss, alpha=0.95)
+  >>> mean_pinball_loss_95p = make_scorer(mean_pinball_loss, alpha=0.95)

Such a scorer can be used in a cross-validation loop:
@@ -2404,7 +2404,7 @@ Such a scorer can be used in a cross-validation loop:
  ...     alpha=0.95,
  ...     random_state=0,
  ... )
-  >>> cross_val_score(estimator, X, y, cv=5, scoring=pinball_loss_95p)
+  >>> cross_val_score(estimator, X, y, cv=5, scoring=mean_pinball_loss_95p)
  array([11.1..., 10.4..., 24.4..., 9.2..., 12.9...])

It is also possible to build scorer objects for hyper-parameter tuning, in
which case the sign of the loss must be switched to ensure that greater means
better.
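
(As a sketch of that tuning workflow, not part of the diff; the
``neg_mean_pinball_loss_95p`` name, the grid values, and the reuse of
``estimator``, ``X`` and ``y`` from above are illustrative assumptions. The
sign is flipped via ``greater_is_better=False``)::

  >>> from sklearn.model_selection import GridSearchCV
  >>> # negate the loss so that "greater is better" holds for the search
  >>> neg_mean_pinball_loss_95p = make_scorer(
  ...     mean_pinball_loss,
  ...     alpha=0.95,
  ...     greater_is_better=False,
  ... )
  >>> search = GridSearchCV(
  ...     estimator,
  ...     param_grid={"learning_rate": [0.05, 0.1, 0.2]},
  ...     scoring=neg_mean_pinball_loss_95p,
  ... ).fit(X, y)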