FIX delete feature_names_in_ when refitting on a ndarray #21389
Conversation
LGTM. An entry in the changelog could be good.
Hum, not that simple: we have several estimators that call validate_data twice, so the second call sees an ndarray, which now deletes the attribute... Maybe a good opportunity to remove these double validations? :)
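For the record, here is a minimal, self-contained sketch of that failure mode; the `Estimator` class and its `_validate_data` helper below are illustrative stand-ins, not the actual scikit-learn code:

```python
import numpy as np
import pandas as pd

class Estimator:
    # Hypothetical estimator showing the double-validation problem: the
    # first validate_data call sees a DataFrame and records
    # feature_names_in_; the second call receives the converted ndarray,
    # which (with the behavior this PR adds) wipes the attribute again.
    def _validate_data(self, X, reset=True):
        if reset:
            if hasattr(X, "columns"):
                self.feature_names_in_ = np.asarray(X.columns, dtype=object)
            elif hasattr(self, "feature_names_in_"):
                # the new behavior: refitting on a plain ndarray deletes
                # the stale feature names
                del self.feature_names_in_
        return np.asarray(X)

    def fit(self, X):
        X = self._validate_data(X)  # first validation: X is a DataFrame
        X = self._validate_data(X)  # second validation: X is now an ndarray
        return self

X = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
est = Estimator().fit(X)
print(hasattr(est, "feature_names_in_"))  # False: the second call deleted it
```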
Indeed, I don't see any easy alternative.
Also, while we are at it, the …
I approved a second time :). This time the tests should pass.
order="C", | ||
accept_large_sparse=False, | ||
) | ||
delattr(self, "classes_") |
Is this change really needed to fix the original problem? If so, it should probably be documented in the changelog.
If not needed, I would rather move it outside of this PR.
It's necessary because we now delegate the validation to _partial_fit, which resets n_features based on the existence of classes_. But it has no impact on the user, since the attribute will still be set afterwards.
I added a comment.
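A rough sketch of the fit/_partial_fit interaction being described; the class and method bodies below are simplified placeholders, not the estimator's real code:

```python
import numpy as np

class Classifier:
    # fit delegates to _partial_fit, which only resets n_features_in_ on
    # the very first call, detected via the presence of classes_. Without
    # deleting classes_ first, a refit would be treated as a continuation
    # and skip the reset.
    def fit(self, X, y):
        # forget state from any previous fit so _partial_fit resets
        if hasattr(self, "classes_"):
            delattr(self, "classes_")
        return self._partial_fit(X, y, classes=np.unique(y))

    def _partial_fit(self, X, y, classes=None):
        first_call = not hasattr(self, "classes_")
        if first_call:
            self.n_features_in_ = X.shape[1]  # reset only on first call
            self.classes_ = classes
        return self
```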
To me it's a design issue of having fit call partial_fit, but I don't want to fix this in this PR :)
```diff
@@ -902,4 +882,8 @@ def perplexity(self, X, sub_sampling=False):
         score : float
             Perplexity score.
         """
+        check_is_fitted(self)
+        X = self._check_non_neg_array(
+            X, reset_n_features=True, whom="LatentDirichletAllocation.perplexity"
```
I don't think we should reset the number of features and their names when computing the perplexity of a dataset:

```diff
-    X, reset_n_features=True, whom="LatentDirichletAllocation.perplexity"
+    X, reset_n_features=False, whom="LatentDirichletAllocation.perplexity"
```
It felt weird, but I did not want to change the existing behavior. Do you think I should change it anyway?
I think so, maybe with a small non-regression test.
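Such a non-regression test might look roughly like the following; the test name and the exact assertion are suggestions, not necessarily what was merged:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def test_perplexity_does_not_reset_n_features():
    # Non-regression sketch: computing the perplexity of a dataset must
    # not overwrite the feature bookkeeping learned during fit.
    rng = np.random.RandomState(0)
    X = rng.randint(0, 5, size=(20, 10))
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
    n_features_before = lda.n_features_in_
    lda.perplexity(X)
    assert lda.n_features_in_ == n_features_before
```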
I cannot spot why this test is failing on only one CI job. It seems that all sources of non-determinism are fixed by setting …
Actually no, the …
It works! @thomasjpfan @glemaitre any second review?
LGTM
…n#21389) Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org>
Fixes #21383