Rescale regularization terms of NMF #20512
Conversation
/cc @TomDLT 😉
LGTM overall. Just a few comments below:
LGTM but I think that there is more discussion to come on the issue.
```python
(
    self._l1_reg_W,
    self._l1_reg_H,
    self._l2_reg_W,
    self._l2_reg_H,
```
🙈
I find it quite nice and readable personally.
I am not sure what you mean by that @glemaitre.
I was referring to #20484 (comment). From what I understand, this PR will mean that we have different ways of treating penalties across estimators.
We already treat penalties differently: Ridge does not scale alpha while Lasso does, for instance. The main issue with NMF is that we have two regularization terms (one for W and one for H) which are not balanced, and there is no way for the user to balance them. For most estimators there is only one regularization term, and even if it is not scaled properly, the user can at least change alpha.
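To make the imbalance concrete, here is a minimal plain-NumPy sketch of a regularized Frobenius NMF objective with the rescaling discussed in this PR. It is not the library's solver code: the helper name is hypothetical, and the scaling of the W penalty by n_features and of the H penalty by n_samples follows the approach described in this thread.

```python
import numpy as np


def nmf_objective(X, W, H, alpha_W=0.0, alpha_H=0.0, l1_ratio=0.0):
    """Illustrative regularized NMF objective (Frobenius loss).

    The W penalty is scaled by n_features and the H penalty by n_samples,
    so that both regularization terms stay comparable to the data-fit term
    regardless of the shape of X.
    """
    n_samples, n_features = X.shape
    loss = 0.5 * np.linalg.norm(X - W @ H, "fro") ** 2

    # Penalty on W, rescaled by n_features.
    pen_W = n_features * alpha_W * (
        l1_ratio * np.abs(W).sum() + 0.5 * (1 - l1_ratio) * (W**2).sum()
    )
    # Penalty on H, rescaled by n_samples.
    pen_H = n_samples * alpha_H * (
        l1_ratio * np.abs(H).sum() + 0.5 * (1 - l1_ratio) * (H**2).sum()
    )
    return loss + pen_W + pen_H
```

Without the two scaling factors, the W term grows with n_samples and the H term with n_features, so a single alpha weights them very differently on non-square data.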
OK. Merging then. Thanks @jeremiedbb
Fixes #20484
Takes over #5296
`alpha_W` and `alpha_H` replace `alpha` and `regularization`, which are deprecated. During the deprecation cycle the new parameters are used if they are not left at their default values (in which case the old parameters are ignored), and the rescaling of the regularization terms is only applied when the new parameters are used.