[MRG] Change VotingClassifier estimators by set_params #7484


Closed

Conversation

yl565
Contributor

@yl565 yl565 commented Sep 24, 2016

PR to #7288



@yl565 yl565 changed the title Change VotingClassifier estimators by set_params [MRG] Change VotingClassifier estimators by set_params Sep 24, 2016
@yl565
Contributor Author

yl565 commented Sep 26, 2016

The VotingClassifier is enhanced so that set_params can now be used to replace a sub-estimator with any other object that has fit and predict methods. Example:

clf = VotingClassifier([('lr', LogisticRegression()), ('nb', GaussianNB())])
clf.set_params(nb=MultinomialNB()).fit(X, y)

A sub-estimator can also be set to None, in which case it is skipped during fitting.
This enhancement enables grid searching over estimator types.
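As a rough sketch of what this enables, once named sub-estimators are settable via set_params, GridSearchCV can treat the estimator in a given slot as just another hyperparameter. The dataset, candidate estimators, and parameter values below are illustrative, not part of the PR:

```python
# Sketch: grid-searching over the *type* of the 'nb' sub-estimator,
# relying on the set_params behavior this PR proposes.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB, MultinomialNB

X, y = load_iris(return_X_y=True)

clf = VotingClassifier([('lr', LogisticRegression(max_iter=1000)),
                        ('nb', GaussianNB())])

# Each candidate for the 'nb' key is a whole estimator object;
# GridSearchCV calls clf.set_params(nb=...) for each one.
param_grid = {'nb': [GaussianNB(), MultinomialNB()]}

search = GridSearchCV(clf, param_grid, cv=3)
search.fit(X, y)
best_nb = search.best_params_['nb']
print(type(best_nb).__name__)
```

The winning estimator type comes back in best_params_ like any other tuned parameter value.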

@jnothman
Member

jnothman commented Sep 27, 2016

Our convention is not to raise validation errors in set_params, just as we generally do not validate in __init__. Validation is to be performed in fit.

I'd be interested to see how much you can reuse _BasePipeline if you moved that (or relevant parts of it) to sklearn.utils.metaestimators._BaseComposition (we could even consider making it public). Perhaps it should also be a CompositionMixin rather than a base class.

@amueller amueller added this to the 0.19 milestone Sep 28, 2016
@amueller
Member

can you please try to rebase? You also have failing tests.
