[MRG] Inherit LinearModels from _LearntSelectorMixin #4241
Conversation
ping @agramfort @amueller Btw, why does _LearntSelectorMixin inherit from TransformerMixin? This caused test errors in the common tests (related to transformers) when I inherited LinearModels from it.
Because it introduces a `transform` method.
I don't see a failure in the common tests.
Please see also #2160.
```python
if threshold is None:
    # Lasso has a l1 penalty but no penalty param.
    if (hasattr(self, "penalty") and self.penalty == "l1" or
            'Lasso' in self.__class__.__name__):
```
How about ElasticNet?
Should it default to median for ENet?
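For illustration, a minimal sketch (not code from this PR) of what a median-based default could look like for ElasticNet; the dataset and the `alpha` value here are arbitrary:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=50, n_features=10, random_state=0)
est = ElasticNet(alpha=0.1).fit(X, y)

# Hypothetical "median" default: keep the features whose absolute
# coefficient is at least the median absolute coefficient.
coef = np.abs(est.coef_)
threshold = np.median(coef)
X_reduced = X[:, coef >= threshold]
```

A median cutoff keeps roughly half the features regardless of sparsity, which is why it can make sense when the penalty does not drive coefficients exactly to zero.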
> I meant that even non-transformers can have transform methods.
IMO, that's a contradiction, effectively. By implementing fit() and
transform(), they implement the Transformer API, and are thus transformers.
The common tests should be explicitly non-mutually-exclusive between
predictors and transformers. What sorts of tests were failing?
Having said that, I'm not sure I agree with this design and might rather
see this style of feature selection come through a wrapping meta-estimator
than a mixin.
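For context, a rough sketch of what such a wrapping meta-estimator could look like; the name `CoefSelector` and the fixed default threshold are made up for illustration (the idea was pursued as SelectFromModel, see #4242 below):

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin, clone

class CoefSelector(BaseEstimator, TransformerMixin):
    """Hypothetical wrapper that fits any linear estimator and keeps
    only the features whose absolute coefficient clears a threshold."""

    def __init__(self, estimator, threshold=1e-5):
        self.estimator = estimator
        self.threshold = threshold

    def fit(self, X, y=None):
        # Fit a fresh copy so the wrapped estimator is left untouched.
        self.estimator_ = clone(self.estimator).fit(X, y)
        return self

    def transform(self, X):
        # Assumes a 1-d coef_, i.e. a single-output linear model.
        mask = np.abs(self.estimator_.coef_) >= self.threshold
        return X[:, mask]
```

With a wrapper like this the selection logic lives in one place and any estimator exposing coef_ can be wrapped, rather than every linear model growing a transform method through the mixin.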
Hmm. Yes that seems a better idea.
However, sorry if this question seems dumb, but is it worth writing a separate meta-estimator just to mask the columns of X whose coefficients fall below a certain threshold?
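The masking itself is indeed tiny; something along these lines (a sketch, with the function name invented here):

```python
import numpy as np

def mask_by_coef(X, coef, threshold):
    """Keep only the columns of X whose learnt coefficient
    magnitude is at least `threshold`."""
    return X[:, np.abs(coef) >= threshold]

# e.g. mask_by_coef(X, est.coef_, 1e-5) for a fitted estimator est
```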
Oh, I see work has already been done on deprecating _LearntSelectorMixin in favor of a SelectFromModel estimator. How far is that from being merged?
I didn't mean to comment on this line of code. @maheshakya tried implementing a meta-estimator version at #3011. But there are arguments in both directions.
I meant that even non-transformers can have transform methods. So if I inherit, say, ElasticNet from _LearntSelectorMixin, technically it is now also a Transformer. The common tests passed because I removed the inheritance.
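To illustrate the point against the codebase as of this PR (the mixin is private and lives in sklearn/feature_selection/from_model.py per the review above, so this is a sketch rather than supported usage):

```python
from sklearn.feature_selection.from_model import _LearntSelectorMixin
from sklearn.linear_model import ElasticNet

class SelectableElasticNet(ElasticNet, _LearntSelectorMixin):
    """ElasticNet plus the mixin: nothing else is needed for it
    to expose the transformer API."""

est = SelectableElasticNet()
print(hasattr(est, "transform"))  # True -- it now duck-types as a transformer
```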
Closed in favor of #4242.
Fixes #4180