[MRG] Adding variable alphaCorrection to classes in naive_bayes.py. #16747
Conversation
I think it's strange that we should need two parameters to control this. As @amueller wrote in #10775 (comment), going back to just allowing small alpha while making sure there is a warning seems a reasonable way forward. On the other hand, we can't do that without breaking backwards compatibility. So I suppose we could use something akin to this PR as a temporary measure, but change the default of alphaCorrection to False over a deprecation period.
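For illustration, a minimal sketch of what such a deprecation period could look like; the "warn" sentinel and the helper name are hypothetical, not part of this PR:

```python
import warnings

# Hypothetical helper sketching the deprecation cycle suggested above:
# default to a "warn" sentinel, emit a FutureWarning when the user relies
# on the old default, and flip the default to False in a later release.
def _resolve_alpha_correction(alpha_correction):
    if alpha_correction == "warn":
        warnings.warn(
            "The default of alphaCorrection will change from True to "
            "False in a future release; set it explicitly to silence "
            "this warning.",
            FutureWarning,
        )
        return True  # keep current behaviour during the deprecation period
    return alpha_correction
```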
sklearn/naive_bayes.py (outdated)

    Additive (Laplace/Lidstone) smoothing parameter
    (set alpha=0 and alphaCorrection=False, for no smoothing).

    alphaCorrection : bool, default=True
we only use camel case in class names.
Should I change the name to alpha_correction or something like that?
I'm not sure whether I misunderstand or was misunderstood. Perhaps instead of trying to set […]
On the other hand, that means if the user sets […]. This is why I cannot understand changing the default value of this variable to […].
@jnothman, what do you think?
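For context, here is how the proposed parameter would be used. This is a hypothetical sketch against this PR's branch, assuming the alphaCorrection name from the diff above; it is not runnable on released scikit-learn:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)
X = rng.randint(5, size=(6, 100))
y = np.array([1, 2, 3, 4, 5, 6])

# On this PR's branch, disabling the correction keeps alpha=0 as given,
# i.e. no smoothing; feature/class pairs with zero counts then get -inf
# log probabilities (NumPy emits a divide-by-zero RuntimeWarning).
clf = MultinomialNB(alpha=0, alphaCorrection=False)
clf.fit(X, y)

# Default behaviour is unchanged: alpha below _ALPHA_MIN warns and is
# clipped up to _ALPHA_MIN before fitting.
clf_default = MultinomialNB(alpha=0)
clf_default.fit(X, y)
```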
Merging changes from the main repository
@amueller, what is your opinion on this?
Merging changes from the main repository
Hi @arka204, your code has a linting issue.
Hope this can help.
Merging changes from the main repository
Update branch
Resolving conflicts
Thanks for your help, @cmarmo!
well done kind sir
Otherwise looking good
@@ -752,6 +752,17 @@ def test_alpha():
        X, y, classes=[0, 1])


    def test_check_alpha():
Please also test that appropriate warnings are raised in each case (use pytest.warns).
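A sketch of what such a test might look like. It assumes this PR's branch, where an alpha below _ALPHA_MIN emits a UserWarning regardless of whether alphaCorrection then clips the value, so it is not runnable on released scikit-learn:

```python
import numpy as np
import pytest

from sklearn.naive_bayes import MultinomialNB

def test_check_alpha_warns():
    X = np.random.randint(5, size=(6, 10))
    y = np.array([1, 2, 3, 4, 5, 6])
    # Default path: warns and clips alpha up to _ALPHA_MIN.
    with pytest.warns(UserWarning, match="alpha too small"):
        MultinomialNB(alpha=0).fit(X, y)
    # Proposed path: warns but keeps alpha=0 as given.
    with pytest.warns(UserWarning, match="alpha too small"):
        MultinomialNB(alpha=0, alphaCorrection=False).fit(X, y)
```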
Please add an entry to the change log at […]
…ka204/scikit-learn into alpha-close-or-equal-0-update
Alpha close or equal 0 update
Update master
…a-is-close-or-equal-0' into master-copy
Update branch
@arka204, will you be able to address the comments and fix the conflicts? Thanks for your work!
Reference Issues/PRs
Fixes #10772
References PR #9131
References PR #10775
What does this implement/fix? Explain your changes.
This PR adds a new parameter, alphaCorrection, to the classes in naive_bayes.py. It defaults to True; if set to False, then an alpha of 0 (or greater, but still smaller than _ALPHA_MIN) is not rounded up to _ALPHA_MIN.
Any other comments?
I decided to propose a solution to the problem mentioned in the issues above.
This makes it possible to fit these estimators with a smaller alpha, even one equal to 0, which was previously impossible: any value smaller than _ALPHA_MIN was replaced with _ALPHA_MIN in _check_alpha(self).
By adding a new parameter I hope to ensure that a small (or zero) alpha is only used when the user intends it.
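For readers of this description, a simplified sketch of the proposed check. It uses the snake_case name suggested in review and a standalone function; the actual change is a method on the estimators and also handles per-feature alpha arrays:

```python
import warnings
import numpy as np

_ALPHA_MIN = 1e-10

# Illustrative sketch of the proposed behaviour, not the real implementation:
# with alpha_correction=True the existing clipping is kept; with False the
# small alpha is used as given, and only a warning is emitted.
def _check_alpha(alpha, alpha_correction=True):
    if np.min(alpha) < _ALPHA_MIN:
        if alpha_correction:
            warnings.warn("alpha too small will result in numeric errors, "
                          "setting alpha = %.1e" % _ALPHA_MIN)
            return np.maximum(alpha, _ALPHA_MIN)
        warnings.warn("alpha too small will result in numeric errors")
    return alpha
```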