True. I guess it's not really an error metric but more of a further analysis of the average distribution of the errors ("bias" is probably a better term for np.average(y_true - y_pred)?).
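For concreteness, a small NumPy-only sketch of the signed mean error (the "bias" mentioned above) next to the mean absolute error; the values here are made up for illustration:

```python
import numpy as np

# Illustrative values only (not from the issue).
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# Signed mean error ("bias"): positive means over-prediction on average.
bias = np.average(y_pred - y_true)

# Mean absolute error discards the sign, so cancellation between
# over- and under-predictions is lost.
mae = np.average(np.abs(y_pred - y_true))

print(bias, mae)  # -> 0.25 0.5
```

Here two of the signed errors cancel, so the bias (0.25) is smaller than the MAE (0.5), which is exactly the information the sign carries.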
Apologies if this has been brought up before. I did have a quick check of https://github.com/scikit-learn/scikit-learn/issues?q=is%3Aissue+sort%3Aupdated-desc+mean+error+is%3Aclosed+label%3Amodule%3Ametrics
Describe the workflow you want to enable
I am interested in the sign of the error, and if possible I would rather compute it in scikit-learn than in NumPy. I would like
mean_absolute_error(y_true, y_pred, absolute=False)
to return np.average(y_pred - y_true), while
mean_absolute_error(y_true, y_pred)
returns the same as before.

Describe your proposed solution
Adding the absolute=False argument will update
https://github.com/scikit-learn/scikit-learn/blob/fd237278e/sklearn/metrics/_regression.py#L122-L190
accordingly.

Describe alternatives you've considered, if relevant
Not sure if a new metric is needed, i.e. mean_error. I believe this approach of a new argument was applied to mean_squared_error (squared=True). See #12895.

Additional context
If this is of interest, I'll be happy to work on this during the SciPy sprint.
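For illustration, here is a minimal standalone sketch of the proposed behaviour. This is a hypothetical helper, not scikit-learn's actual implementation (the real mean_absolute_error also handles sample_weight and multioutput):

```python
import numpy as np

def mean_absolute_error(y_true, y_pred, absolute=True):
    """Hypothetical sketch of the proposed absolute=False behaviour."""
    errors = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    if absolute:
        errors = np.abs(errors)  # current behaviour: MAE
    return np.average(errors)    # absolute=False: signed mean error

# Signed errors can cancel; absolute errors cannot.
print(mean_absolute_error([3, 5], [4, 3], absolute=False))  # -> -0.5
print(mean_absolute_error([3, 5], [4, 3]))                  # -> 1.5
```

With absolute=True (the default) nothing changes, so existing callers are unaffected; only opting in to absolute=False exposes the sign.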