Issue 1527: normalize option for zero-one loss #1534
Conversation
-    Positive integer (number of misclassifications). The best performance
-    is 0.
+    Positive integer (number of misclassifications) or float (fraction of
+    misclassifications). The best performance is 0.
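(For orientation, a hedged sketch of the behavior this diff documents; the call signature is taken from the diff, the label values are made up:)

    >>> from sklearn.metrics import zero_one
    >>> y_true = [1, 0, 1, 1]
    >>> y_pred = [1, 1, 0, 1]
    >>> zero_one(y_true, y_pred)                   # number of misclassifications
    2
    >>> zero_one(y_true, y_pred, normalize=True)   # fraction: 2 errors / 4 samples
    0.5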
or float (fraction of misclassifications) when normalize=True. Other than that, +1 for merge.
Thanks for the pull request. For the future, I think it is good if the title of the pull request is a bit more descriptive than just the issue number. These are a bit hard to remember ;)
Oh, and can you please also add a test for the new functionality?
     Returns
     -------
     loss : float

     """
     y_true, y_pred = check_arrays(y_true, y_pred)
-    return np.sum(y_pred != y_true)
+    if normalize == False:
if not normalize: is better
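(A minimal sketch of how the function body could read with the suggested check; check_arrays and the full docstring are elided, and this is an assumption about the shape of the code, not the merged diff:)

    import numpy as np

    def zero_one(y_true, y_pred, normalize=False):
        # Count mismatches; with normalize=True, return the mean of the
        # boolean error mask, i.e. the fraction of misclassifications.
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        if not normalize:          # idiomatic spelling of "normalize == False"
            return np.sum(y_pred != y_true)
        return np.mean(y_pred != y_true)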
Is it possible to wait a little before merging? By the way, an example would be nice in the docstring.
Apart from the above comments, 👍 for merge
@kyleabeauchamp As we discussed in #1512, don't wait for my PR. I'll update the narrative doc.
For the example, you can use:
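(The code of this suggestion was lost in extraction; judging from the doctest that appears later in the diff, it was presumably along these lines, with the y_pred values being a guess. The last expected value is what later causes the floating-point trouble discussed below:)

    >>> from sklearn.metrics import zero_one
    >>> y_true = [0, 1, 2, 3, 4]
    >>> y_pred = [0, 2, 2, 3, 0]    # two mistakes out of five samples
    >>> zero_one(y_true, y_pred)
    2
    >>> zero_one(y_true, y_pred, normalize=True)
    0.4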
For the testing part, you can add and adapt existing tests in
When you have finished, add your contribution to whatsnew.rst. Thanks for the PR. :-)
OK, I think I integrated everyone's comments, and I think it's ready to go. One note--the following doctest didn't work as written because of floating-point representation:
I did some rounding to avoid FP ambiguity.
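(To make the failure concrete: before Python 2.7/3.1, float repr printed 17 significant digits, so 2/5 did not round-trip to the literal 0.4. A sketch of the discrepancy:)

    # On Python <= 2.6, repr used 17 significant digits:
    #     >>> 2 / 5.0
    #     0.40000000000000002
    # On Python >= 2.7 / 3.1 (shortest round-trip repr):
    #     >>> 2 / 5.0
    #     0.4
    # Rounding sidesteps the difference on every interpreter:
    #     >>> round(2 / 5.0, 5)
    #     0.4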
Instead of rounding, you should use the doctest ELLIPSIS directive.
Are you sure this is the best approach? The Python docs (http://docs.python.org/2/library/doctest.html) seem to suggest that rounding is the best way to go. My concern with ELLIPSIS is that it's a general wildcard-matching capability, while rounding allows us to directly control the level of precision required.
I would use rounding for tests and ellipsis for doctests, since in doctests the prime concern is for the example to be clear to the user, and wrapping the call in round adds noise around the API being demonstrated.
This is the usual way it is done in sklearn.
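(For reference, the ELLIPSIS spelling the reviewers mean looks like this; the directive makes the trailing "..." match any remaining digits, so both 0.4 and 0.40000000000000002 pass:)

    >>> zero_one(y_true, y_pred, normalize=True)    # doctest: +ELLIPSIS
    0.4...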
    >>> y_true = [0, 1, 2, 3, 4]
    >>> zero_one(y_true, y_pred)
    2
    >>> round(zero_one(y_true, y_pred, normalize=True),5)
If we decide on sticking with round here, please add a space between the comma and the 5; as written it is not PEP8.
@amueller we cross-posted... at least we agreed :)
@vene yeah, saw that. I take that as a good sign ;)
OK, I agree that ELLIPSIS is prettier for users.
Instead of having round or ellipsis, it might be better to change the example.
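(The concrete example was lost in extraction; the idea is to pick labels whose error fraction is exactly representable in binary, so the doctest needs neither round nor ELLIPSIS. A hypothetical instance:)

    >>> y_true = [0, 1, 2, 3]
    >>> y_pred = [0, 2, 2, 3]     # one mistake out of four samples
    >>> zero_one(y_true, y_pred)
    1
    >>> zero_one(y_true, y_pred, normalize=True)   # 1/4 is exact in binary
    0.25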
@@ -517,6 +519,10 @@ def test_symmetry():
     # symmetric
     assert_equal(zero_one(y_true, y_pred),
                  zero_one(y_pred, y_true))

+    assert_almost_equal(zero_one(y_true, y_pred, normalize=True),
+        zero_one(y_pred, y_true, normalize=True), 2)
pep8
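(The remark presumably targets the continuation line; a PEP8-conforming alignment would be:)

    assert_almost_equal(zero_one(y_true, y_pred, normalize=True),
                        zero_one(y_pred, y_true, normalize=True), 2)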
sure that also works ;)
I tend to agree with the comment of @vene on the sentence formulation. Apart from the above comment, +1 for merge. Thanks for your quick responses @kyleabeauchamp !!!
Looks good. +1 for merging as well.
Crazy amount of reviewers out there tonight. Sweet :) Will merge by rebase! (pretty sure @vene won't mind ;)
Merged.
Thanks a lot @kyleabeauchamp, in particular for the immediate responses! 👍
@kyleabeauchamp When you added yourself to whatsnew.rst, you made your name a link, but didn't add a website. I removed the link for now. Give me a shout if you want to add your website.
Oops--sorry about that, I copy-pasted a previous line in whatsnew.rst.
I think this is the easiest way to address this issue.