
[MRG] MNT Ignore xfail_checks in check_estimator #17219


Closed

Conversation

@NicolasHug (Member) commented May 14, 2020

Fixes #16958

Alternative to and Closes #16963

Alternative to and Closes #17222

CC @thomasjpfan @rth.

I would prefer this approach, considering how much simpler it is. I agree with what @rth said: for more advanced control, developers should be using parametrize_with_checks and pytest.
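A hedged sketch of the two entry points being compared here (the estimator stub, the check name, and the reason string are illustrative, and exact signatures vary across scikit-learn versions):

```python
from sklearn.base import BaseEstimator
from sklearn.utils.estimator_checks import check_estimator, parametrize_with_checks


class MyEstimator(BaseEstimator):
    """Minimal illustrative stub; a real estimator would implement fit/predict properly."""

    def fit(self, X, y=None):
        return self

    def _more_tags(self):
        # Map check name -> reason for the expected failure.
        return {
            "_xfail_checks": {
                "check_methods_subset_invariance": "illustrative reason",
            }
        }


# With this PR, check_estimator() simply ignores the tagged check
# (note that this minimal stub is not expected to pass the other common checks):
# check_estimator(MyEstimator())

# With pytest, the same tag marks the generated test as xfail instead,
# which is the finer-grained control referred to above.
@parametrize_with_checks([MyEstimator()])
def test_sklearn_compatible_estimator(estimator, check):
    check(estimator)
```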

@rth (Member) left a comment

Thanks @NicolasHug! I like the fact that it's very simple.

I'm wondering whether we should raise a warning for each xfailed test (in the _is_xfail function).
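For context, a hedged sketch of what such a helper could look like; _is_xfail is the name used in the thread, but the signature, the warn flag, and the body below are assumptions (and _get_tags is a private scikit-learn API of that era):

```python
import warnings


def _is_xfail(estimator, check, warn=False):
    """Return True if `check` is listed in the estimator's _xfail_checks tag."""
    xfail_checks = estimator._get_tags().get("_xfail_checks") or {}
    # Checks may be plain functions or functools.partial objects.
    check_name = getattr(check, "__name__", str(check))
    if check_name in xfail_checks:
        if warn:
            warnings.warn(
                f"Ignoring {check_name} for {type(estimator).__name__}: "
                f"{xfail_checks[check_name]}"
            )
        return True
    return False
```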

@NicolasHug (Member, Author)

I'm wondering if we should raise a warning

The more I think about it, the more I think the answer is no. To me it makes sense to rely on pytest for anything more than just ignoring the test.

And I mean, pytest is definitely a standard now. It's not like we're asking developers to use an esoteric testing framework that nobody uses.

@rth (Member) left a comment

OK, LGTM. We do mention in the documentation that users should be very careful about _xfail_checks.

@rth (Member) commented May 14, 2020

FWIW this is very similar to what I proposed in #16507.

I changed my mind about not raising the warning. We raise a warning for skipped tests in check_estimator, so I think we should also do that for xfail_checks (or do it for neither). Skipped and xfailed tests are basically the same thing with minor variations.

@thomasjpfan (Member)

Since parametrize_with_checks uses check_estimator to generate the tests, this PR will filter out the xfailed checks from pytest as well.
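A hedged sketch of that coupling, going by the thread's description (it reuses the illustrative MyEstimator stub from the earlier sketch; the generate_only keyword reflects the check_estimator API of that era, and the details are simplified):

```python
from sklearn.utils.estimator_checks import check_estimator

# As described above, parametrize_with_checks builds its test cases from the
# (estimator, check) pairs that check_estimator yields, so a check dropped
# inside check_estimator never reaches pytest at all.
for estimator, check in check_estimator(MyEstimator(), generate_only=True):
    # Any check filtered out inside check_estimator is already missing here,
    # so pytest never even sees an xfail-marked test for it.
    pass
```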

@rth (Member) commented May 14, 2020

Thanks for catching that! Should we use _yield_all_checks directly in parametrize_with_checks, even if it means a couple of repeated lines?

I think the fact that parametrize_with_checks relies on check_estimator is the source of a number of constraints we have been experiencing when modifying anything in this mechanism.
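A hedged sketch of that refactor: let parametrize_with_checks draw its parameters from _yield_all_checks itself rather than going through check_estimator. _yield_all_checks is the private helper named above; its signature here, the use of the _is_xfail helper sketched earlier, and the xfail reason string are all assumptions.

```python
from functools import partial

import pytest
# Private helper named in this thread; its exact signature differs across versions.
from sklearn.utils.estimator_checks import _yield_all_checks


def parametrize_with_checks(estimators):
    def _generate():
        for estimator in estimators:
            name = type(estimator).__name__
            for check in _yield_all_checks(name, estimator):
                marks = []
                if _is_xfail(estimator, check):  # helper sketched earlier
                    marks.append(
                        pytest.mark.xfail(reason="tagged in _xfail_checks")
                    )
                # Bind the name so the generated test only has to call check(estimator).
                yield pytest.param(estimator, partial(check, name), marks=marks)

    return pytest.mark.parametrize("estimator, check", list(_generate()))
```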

@NicolasHug (Member, Author) commented May 14, 2020

Good catch, thanks, should be fixed.

I think the fact that parametrize_with_checks relies on check_estimator is the source of a number of constraints we have been experiencing when modifying anything in this mechanism.

Totally agree.

I'm also quite puzzled by the need to pass name everywhere... that parameter is never used in the checks themselves. Well, it's used in some tests, but we could just check types instead of names, or retrieve the name from the instances.
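A hedged illustration of that last point (the helper name is hypothetical):

```python
def _estimator_name(estimator):
    # The display name a check needs can be recovered from the instance itself,
    # so it would not have to be threaded through every call.
    return type(estimator).__name__
```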

@NicolasHug (Member, Author)

Opened #17222, which is the same except that checks in the tag will be skipped with a SkipTestWarning.


Successfully merging this pull request may close these issues.

xfail_checks tag only works with parametrize_with_checks
3 participants