57 changes: 53 additions & 4 deletions doc/modules/model_evaluation.rst
Explained variance score
.........................

The :func:`explained_variance_score` computes the explained variance
regression score.

If :math:`\hat{y}` is the estimated target output and :math:`y` is the
corresponding (correct) target output, then the explained variance is
estimated as follows:

.. math::

  \texttt{explained\_variance}(y, \hat{y}) = 1 - \frac{Var\{ y - \hat{y}\}}{Var\{ y\}}

The best possible score is 1.0, lower values are worse.

The :func:`explained_variance_score` function supports an `output_weights`
keyword. If the value is `None`, the explained variance score is computed for
each output separately and a numpy array is returned. If the value is
`'uniform'`, the per-output scores are averaged with a uniform weight of
`1 / n_outputs`. An array of custom weights can also be passed to compute a
weighted average over the outputs.

Here is a small example of usage of the :func:`explained_variance_score`
function::

>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y_true, y_pred) # doctest: +ELLIPSIS
0.957...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> explained_variance_score(y_true, y_pred, output_weights=None)
... # doctest: +ELLIPSIS
array([ 0.967..., 1. ])
>>> explained_variance_score(y_true, y_pred, output_weights=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.990...
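As a sanity check, the multioutput values above can be reproduced with plain
NumPy. The following is a sketch of the underlying arithmetic, not
scikit-learn's implementation:

```python
import numpy as np

# Multioutput targets from the doctest above.
y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = np.array([[0, 2], [-1, 2], [8, -5]])

# Explained variance per output: 1 - Var(y - y_hat) / Var(y), column-wise.
per_output = 1 - np.var(y_true - y_pred, axis=0) / np.var(y_true, axis=0)
print(per_output)                                  # approximately [0.9677, 1.0]

# Custom output weights reproduce the weighted score from the doctest.
print(np.average(per_output, weights=[0.3, 0.7]))  # approximately 0.9903
```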

Mean absolute error
...................
The :func:`mean_absolute_error` function computes the mean absolute error, a
risk metric corresponding to the expected value of the absolute error loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean absolute error
(MAE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

\text{MAE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \left| y_i - \hat{y}_i \right|.

The :func:`mean_absolute_error` function supports an `output_weights` keyword.
If the value is `None`, the mean absolute error is computed for each output
separately and a numpy array is returned. If the value is `'uniform'`, the
per-output errors are averaged with a uniform weight of `1 / n_outputs`. An
array of custom weights can also be passed to compute a weighted average over
the outputs.

Here is a small example of usage of the :func:`mean_absolute_error` function::

>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, output_weights=None)
array([ 0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, output_weights=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.849...
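The per-output values and the weighted average above can be checked with plain
NumPy; this sketch mirrors the arithmetic, not scikit-learn's implementation:

```python
import numpy as np

y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = np.array([[0, 2], [-1, 2], [8, -5]])

# Per-output MAE: mean absolute difference down each column.
per_output = np.abs(y_true - y_pred).mean(axis=0)
print(per_output)                                  # [0.5, 1.0]

# Uniform weighting is a plain mean; custom weights give the doctest value.
print(per_output.mean())                           # 0.75
print(np.average(per_output, weights=[0.3, 0.7]))  # approximately 0.85
```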

Mean squared error
...................
The :func:`mean_squared_error` function computes the mean squared error, a
risk metric corresponding to the expected value of the squared (quadratic)
error loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean squared error
(MSE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

\text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2.

The :func:`mean_squared_error` function supports an `output_weights` keyword.
If the value is `None`, the mean squared error is computed for each output
separately and a numpy array is returned. If the value is `'uniform'`, the
per-output errors are averaged with a uniform weight of `1 / n_outputs`. An
array of custom weights can also be passed to compute a weighted average over
the outputs.

Here is a small example of usage of the :func:`mean_squared_error`
function::

>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_squared_error(y_true, y_pred) # doctest: +ELLIPSIS
0.7083...
>>> mean_squared_error(y_true, y_pred, output_weights=None)
... # doctest: +ELLIPSIS
array([ 0.416..., 1. ])
>>> mean_squared_error(y_true, y_pred, output_weights=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.824...
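The same values can be reproduced with plain NumPy as a sanity check; this is
a sketch of the arithmetic, not scikit-learn's implementation:

```python
import numpy as np

y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = np.array([[0, 2], [-1, 2], [8, -5]])

# Per-output MSE: mean squared difference down each column.
per_output = ((y_true - y_pred) ** 2).mean(axis=0)
print(per_output)          # approximately [0.4167, 1.0]

# Uniform weighting over the outputs gives the scalar score.
print(per_output.mean())   # approximately 0.7083
```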

R² score, the coefficient of determination
...........................................

The :func:`r2_score` function computes R², the coefficient of determination.
It provides a measure of how well future samples are likely to be predicted
by the model.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value, then the score R² estimated
over :math:`n_{\text{samples}}` is defined as

.. math::

  R^2(y, \hat{y}) = 1 - \frac{\sum_{i=0}^{n_{\text{samples}} - 1} (y_i - \hat{y}_i)^2}{\sum_{i=0}^{n_{\text{samples}} - 1} (y_i - \bar{y})^2}

where :math:`\bar{y} = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}} - 1} y_i`.

The :func:`r2_score` function supports an `output_weights` keyword. If the
value is `None`, the R² score is computed for each output separately and a
numpy array is returned. If the value is `'uniform'`, the per-output scores
are averaged with a uniform weight of `1 / n_outputs`. An array of custom
weights can also be passed to compute a weighted average over the outputs.

Here is a small example of usage of the :func:`r2_score` function::

>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)  # doctest: +ELLIPSIS
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred) # doctest: +ELLIPSIS
0.936...
>>> r2_score(y_true, y_pred, output_weights=None)
... # doctest: +ELLIPSIS
array([ 0.965..., 0.908...])
>>> r2_score(y_true, y_pred, output_weights=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.925...
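The per-output R² values above can be reproduced with plain NumPy; the
following is a sketch of the underlying arithmetic, not scikit-learn's
implementation:

```python
import numpy as np

y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = np.array([[0, 2], [-1, 2], [8, -5]])

# Per-output R^2: 1 - residual sum of squares / total sum of squares,
# with both sums taken column-wise.
ss_res = ((y_true - y_pred) ** 2).sum(axis=0)
ss_tot = ((y_true - y_true.mean(axis=0)) ** 2).sum(axis=0)
per_output = 1 - ss_res / ss_tot
print(per_output)          # approximately [0.9654, 0.9082]

# Uniform averaging gives the scalar score from the doctest.
print(per_output.mean())   # approximately 0.9368
```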
