DEP loss_ attribute in gradient boosting #23079
Conversation
Deprecating loss_ was mentioned once here: #15139 (comment). I am happy with deprecating it as well.
LGTM
Meaning, this improvement will have to wait until 1.3 (the PR will be large enough without keeping backwards compatibility).
LGTM.
For most of them, you can use the equivalent function under …
Please see the link in the original comment for a code sample.
I meant: What is your use case? In words, not in code. Note that …
The code lets me see the loss as a function of the boosting stage, which helps me figure out the optimal number of estimators.
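As an illustration of that use case, here is a minimal sketch (data and estimator settings are made up for the example) that recovers the per-stage training loss with staged_predict rather than the loss_ attribute:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=200, random_state=0)
est = GradientBoostingRegressor(n_estimators=300, random_state=0).fit(X, y)

# One loss value per boosting stage (i.e. per number of trees used so far);
# plotting this curve shows where the improvement flattens out.
stage_losses = [
    mean_squared_error(y, y_pred) for y_pred in est.staged_predict(X)
]
```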
Also, I see nothing like …
Note that this only gives the training error. A better strategy is to look at the test/validation error, preferably with cross-validation, e.g. with …
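A hedged sketch of that suggestion, using a single held-out validation set with staged_predict (a fully cross-validated version could repeat this over KFold splits):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

est = GradientBoostingRegressor(n_estimators=300, random_state=0)
est.fit(X_train, y_train)

# Validation loss after each stage; its minimum suggests a good n_estimators.
val_losses = [
    mean_squared_error(y_val, y_pred) for y_pred in est.staged_predict(X_val)
]
best_n_estimators = int(np.argmin(val_losses)) + 1
```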
I am passing …
Has been deprecated in sklearn, see scikit-learn/scikit-learn#23079
Reference Issues/PRs
None.
What does this implement/fix? Explain your changes.
This PR deprecates the attribute loss_ of GradientBoostingClassifier and GradientBoostingRegressor.
Any other comments?
This will greatly simplify using the common losses under sklearn._loss in (old) gradient boosting.
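For readers unfamiliar with how such a change typically looks, below is a minimal stand-alone sketch of the usual scikit-learn attribute-deprecation pattern. The warning text and the internal attribute name are assumptions for illustration, not taken from this PR's diff:

```python
import warnings


class GradientBoostingRegressorSketch:
    """Stripped-down stand-in showing only the deprecation mechanics."""

    def __init__(self):
        # Placeholder for the fitted loss object the estimator keeps privately.
        self._loss = "squared_error"

    @property
    def loss_(self):
        # The old public attribute keeps working during the deprecation
        # cycle but emits a FutureWarning on every access.
        warnings.warn(
            "Attribute `loss_` is deprecated and will be removed in a "
            "future release.",
            FutureWarning,
        )
        return self._loss
```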