
[MRG] FIX ignore null weight when computing estimator error in AdaBoostRegressor #14294

Merged: 30 commits into scikit-learn:master on Oct 25, 2019

Conversation

@glemaitre (Member) commented on Jul 8, 2019

AdaBoostRegressor suffers from a bug where the estimator error is normalized by the maximum absolute error over all predictions, including those corresponding to a null weight in sample_weight.
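
To make the bug concrete, here is a minimal sketch on toy data of the difference between normalizing over all samples and normalizing over positive-weight samples only (the masking logic follows the spirit of the patch; variable names are illustrative, not the merged diff):

```python
import numpy as np

# Toy data: the last sample has zero weight but by far the largest error.
y = np.array([0.0, 1.0, 2.0, 100.0])
y_predict = np.array([0.1, 1.2, 1.8, 0.0])
sample_weight = np.array([1 / 3, 1 / 3, 1 / 3, 0.0])

error_vect = np.abs(y_predict - y)

# Buggy behaviour: the max is taken over ALL samples, so the zero-weight
# outlier (error 100) shrinks every normalized error toward zero.
buggy_error = (sample_weight * (error_vect / error_vect.max())).sum()

# Fixed behaviour: restrict both the normalization and the weighted
# average to samples with positive weight.
sample_mask = sample_weight > 0
masked_weight = sample_weight[sample_mask]
masked_error = error_vect[sample_mask]
error_max = masked_error.max()
if error_max != 0:
    masked_error = masked_error / error_max
estimator_error = (masked_weight * masked_error).sum()

print(buggy_error)      # ~0.0017: artificially tiny
print(estimator_error)  # ~0.83: reflects only the weighted samples
```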

@rth (Member) commented on Jul 8, 2019

BTW, this seems related to #14286, which investigates a similar issue with SVMs.

@glemaitre changed the title from "FIX ignore null weight when computing estimator error in AdaBoostRegressor" to "[MRG] FIX ignore null weight when computing estimator error in AdaBoostRegressor" on Jul 8, 2019
@glemaitre force-pushed the is/fix_adaboost_regressor branch from ede568e to 6902eff on September 10, 2019
@glemaitre (Member, Author) commented

Ping @rth @jeremiedbb: I think you are the best placed to review this bug fix.

@jeremiedbb (Member) left a comment

Not using points with zero weight to compute the error seems fair.

Below are some comments.

@glemaitre force-pushed the is/fix_adaboost_regressor branch from 2f27d59 to 810c637 on September 12, 2019
@jeremiedbb (Member) left a comment

Besides minor comments, LGTM.

glemaitre and others added 4 commits September 12, 2019 17:36
Co-Authored-By: jeremiedbb <34657725+jeremiedbb@users.noreply.github.com>
@glemaitre (Member, Author) commented

@adrinjalali I'm pinging you aggressively ;)

@adrinjalali (Member) left a comment

I need to double-check a few things before I can be sure about its correctness and completeness. But I was wondering if it'd be easier to mask the samples and the sample weight right at the beginning of the fit, and continue with all of what's left.

P.S. thanks for the very aggressive ping :P

@glemaitre (Member, Author) commented

> I was wondering if it'd be easier to mask the samples and the sample weight right at the beginning of the fit, and continue with all of what's left.

Nope, because the bootstrapping would then lead to different results: the input data seen by each base estimator would differ due to the masking.
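
A hedged sketch of why (the choice-based resampling below only mimics the weighted bootstrap that AdaBoostRegressor performs internally): masking up front changes both the number of draws and the random stream, so the bootstrap samples, and hence the fitted base estimators, differ even though zero-weight points can never be drawn.

```python
import numpy as np

sample_weight = np.array([0.25, 0.0, 0.25, 0.25, 0.0, 0.25])
n = len(sample_weight)

# Current behaviour: zero-weight points are never drawn, but the
# bootstrap still draws n indices from the full data set.
rng = np.random.RandomState(42)
idx_full = rng.choice(np.arange(n), size=n, replace=True, p=sample_weight)

# Masking first: fewer candidates and fewer draws, hence a different
# bootstrap sample and a different downstream random state.
mask = sample_weight > 0
weights = sample_weight[mask] / sample_weight[mask].sum()
n_masked = int(mask.sum())
rng = np.random.RandomState(42)
idx_masked = rng.choice(np.arange(n_masked), size=n_masked,
                        replace=True, p=weights)

print(len(idx_full), len(idx_masked))  # 6 vs 4: different training sets
```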

@glemaitre (Member, Author) commented

@adrinjalali Any other comments?

@adrinjalali (Member) left a comment

Other than the small nit, LGTM. Thanks @glemaitre

@glemaitre (Member, Author) commented

Ready to be merged @adrinjalali @jeremiedbb

@adrinjalali adrinjalali merged commit 1888a96 into scikit-learn:master Oct 25, 2019
@adrinjalali (Member) commented

Thanks @glemaitre :)
