FEAT rfecv: add support and ranking for each cv and step #30179
Conversation
Thanks for the PR @MarieS-WiMLDS .
This looks good overall. It would need a changelog entry, as well as tests for the new feature.
sklearn/feature_selection/_rfe.py

ranking(k) : ndarray of shape (n_subsets_of_features,)
    The cross-validation ranking across (k)th fold.

support(k) : ndarray of shape (n_subsets_of_features,)
    The cross-validation supports across (k)th fold.
would be nice to update this part of the docstring to mention when each key would exist here, since it depends on constructor arguments. Also, the .. versionadded directives need to be there for each newly added key.
For the version, I didn't find anything in the docs (cf. my research). I wrote down a version number myself, but I'm unsure, as I guess it's something automated?
> would be nice to update this part of the docstring to mention when each key would exist here, since it depends on constructor arguments.

I don't get this: these keys should appear in the cv_results_ dict every time RFECV is fitted, just like "split(k)_test_score", since they are built by the same process?
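For context, a minimal sketch of the cv_results_ layout this reply assumes (the ranking/support key names are the ones proposed in this PR; the exact set of keys depends on the scikit-learn version and the cv argument):

    rfecv.cv_results_.keys()
    # dict_keys(['mean_test_score', 'std_test_score', 'n_features',
    #            'split0_test_score', ..., 'split4_test_score',
    #            'split0_ranking',    ..., 'split4_ranking',
    #            'split0_support',    ..., 'split4_support'])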
Tests are failing. Let me know if you need help on the PR.
Hi @adrinjalali, did you have a chance to take another look, please?
Some nits, otherwise LGTM. Thanks @MarieS-WiMLDS
doc/whats_new/upcoming_changes/sklearn.feature_selection/30179.enhancement.rst
sklearn/feature_selection/_rfe.py

split(k)_ranking : ndarray of shape (n_subsets_of_features,)
    The cross-validation rankings across (k)th fold.

    .. versionadded:: 1.7

split(k)_support : ndarray of shape (n_subsets_of_features,)
    The cross-validation supports across (k)th fold.

    .. versionadded:: 1.7
can we have links to a part of the user guide or an example where "ranking" and "support" are nicely explained? I don't find them intuitive enough for somebody who reads this for the first time.
I found something for the ranking, but nothing for the support. I enriched the description.
eaeb298 to 8b075cd
LGTM. Thanks @MarieS-WiMLDS
Reference Issues/PRs
Fixes #17782
What does this implement/fix? Explain your changes.
It adds the option to access the ranking and support for each CV fold and step.
As of today, it is possible to see what looks like the best number of features to choose, but not whether each CV fold converged towards the same feature selection. Having access to these results also gives a better view of the stability, and therefore the reliability, of the selection.
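For illustration, a hedged usage sketch (the split(k)_ranking and split(k)_support keys are the ones added by this PR, so this assumes a scikit-learn build that includes the change; the rest is the existing RFECV API):

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFECV
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=10,
                               n_informative=4, random_state=0)
    rfecv = RFECV(LogisticRegression(max_iter=1000), step=1, cv=5).fit(X, y)

    # Existing key: test score of fold 0 for each candidate number of features.
    print(rfecv.cv_results_["split0_test_score"])

    # New keys from this PR: per-fold ranking and support, making it possible
    # to check whether every fold converged towards the same feature selection.
    print(rfecv.cv_results_["split0_ranking"])
    print(rfecv.cv_results_["split0_support"])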
Any other comments?
I chose to follow the same pattern as the existing "split(k)_test_score" keys. However, it doesn't seem very user-friendly: I expect a user to want to see what happens inside a whole CV run, rather than having results grouped per CV split (see the sketch below).
I would appreciate validation of this technical choice before going on to write more tests & docs.
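To make the alternative concrete, here is a hypothetical helper (not part of this PR) that regroups the flat cv_results_ layout into one dict per CV split, i.e. the per-fold view discussed above:

    def results_per_fold(cv_results, n_splits):
        # Hypothetical regrouping: one dict per CV split, instead of one
        # flat dict keyed by "split{k}_*" as in the current layout.
        return [
            {
                "test_score": cv_results[f"split{k}_test_score"],
                "ranking": cv_results[f"split{k}_ranking"],
                "support": cv_results[f"split{k}_support"],
            }
            for k in range(n_splits)
        ]

    # e.g. results_per_fold(rfecv.cv_results_, n_splits=5)[1]["ranking"]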