
[MRG+1] Add Brier_score_loss to model validation doc #6841


Closed

Conversation

moutai
Contributor

@moutai commented May 29, 2016

Reference Issue

Add Brier_score_loss to model validation doc #4804

What does this implement/fix? Explain your changes.

Trying to fix the Travis build with an ellipsis in the docs.

@moutai force-pushed the brier_model_evaluation_docs branch from 2e1e031 to 20ff083 on May 29, 2016 21:28
@agramfort
Member

LGTM

@agramfort changed the title from "Add Brier_score_loss to model validation doc" to "[MRG+1] Add Brier_score_loss to model validation doc" on May 31, 2016
>>> y_prob = np.array([0.1, 0.9, 0.8, 0.4])
>>> y_pred = np.array([0, 1, 1, 0])
>>> brier_score_loss(y_true, y_prob) # doctest: +ELLIPSIS
0.055
You don't need # doctest: +ELLIPSIS if you don't use the ... placeholder in the string representation of the result.
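
In other words, the directive is only needed when the expected output uses the placeholder, roughly like this (a sketch; the exact floating-point repr of the result is assumed to have more digits than the 0.055 shown):

>>> brier_score_loss(y_true, y_prob)  # doctest: +ELLIPSIS
0.055...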

@ogrisel
Member

ogrisel commented Jun 3, 2016

Please also add a reference to the Wikipedia article that gives more interesting details.

@moutai
Contributor Author

moutai commented Jun 6, 2016

@ogrisel thanks for the feedback. I added an explanation of why the metric is useful, the reference, and the fix to the doctest. Let me know what you think. Thanks!

(the mean square difference is smaller), the more accurate the prediction is.
It can be thought of as a measure of the "calibration" of a set of probabilistic predictions.
`Here <http://docs.lib.noaa.gov/rescue/mwr/078/mwr-078-01-0001.pdf>`_ is
the paper describing it.
Instead of putting an inline link with "Here", please use the standard "Reference" section format used consistently throughout the scikit-learn documentation.
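
A sketch of what that could look like, following the ".. topic:: References:" block style used elsewhere in the scikit-learn narrative documentation (the bibliographic details are inferred from the linked NOAA scan of Brier's 1950 paper):

.. topic:: References:

  * G. Brier, `Verification of forecasts expressed in terms of probability
    <http://docs.lib.noaa.gov/rescue/mwr/078/mwr-078-01-0001.pdf>`_,
    Monthly Weather Review 78.1 (1950)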

@ogrisel
Member

ogrisel commented Jun 7, 2016

Did you understand why those doctests were not run?

@ogrisel
Member

ogrisel commented Jun 7, 2016

> Did you understand why those doctests were not run?

Apparently the Travis tests did not run at all on your first commit for some reason; only the AppVeyor tests passed, and those intentionally do not run the doctests under Windows.

Anyway, Travis ran the tests in doc/model_selection.rst in your last commit, so this is fine. Please just address my other comments.

@moutai
Contributor Author

moutai commented Jun 13, 2016

@ogrisel
Not sure why the doctests were not run on the original commit. They run fine now on every commit.
Thanks for the feedback. I added the requested fixes; let me know if there is anything else to add or remove.

BS = \frac{1}{N} \sum_{t=1}^{N}(f_t - o_t)^2

where :math:`N` is the total number of predictions, :math:`f_t` is the predicted
probability of the actual outcome :math:`o_t`.
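
For concreteness, plugging the doctest values shown earlier into this formula (with y_true assumed to be [0, 1, 1, 0], which is what makes the expected output 0.055) gives:

BS = \frac{1}{4}\left[(0.1 - 0)^2 + (0.9 - 1)^2 + (0.8 - 1)^2 + (0.4 - 0)^2\right] = \frac{0.22}{4} = 0.055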
It's a detail, but I prefer if, before a code block, you add "::", which is the explicit way of defining a code-block, rather than relying on indentation, which is fragile.
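
For example, with the explicit marker the block would read something like this (the introductory sentence here is invented for illustration, not quoted from the PR):

The Brier score can be computed with :func:`brier_score_loss`::

    >>> brier_score_loss(y_true, y_prob)  # doctest: +ELLIPSIS
    0.055...

The trailing "::" renders as a single colon and explicitly marks the indented block that follows as a literal block, instead of relying on indentation alone.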

I agree. I will do that change and squash/merge.

@ogrisel
Member

ogrisel commented Jun 16, 2016

Merged by rebase as: c6a457f

Thanks @moutai.
