Conversation

@betatim (Member) commented Aug 15, 2025

Reference Issues/PRs

Fixes #31931

What does this implement/fix? Explain your changes.

This adds an option to parametrize_with_checks (and the functions it uses) that allows you to specify "strict mode" for xfailed tests. In strict mode, an unexpectedly passing test is reported as a test failure. This helps keep the xfail list up to date.

You can configure this globally via pytest.ini for the whole test suite or for individual tests via the xfail marker.
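For reference, this is a minimal sketch of the standard pytest mechanics the PR builds on. The `xfail_strict` option name and the `strict` marker argument are standard pytest; everything else here is illustrative:

```ini
# pytest.ini — hypothetical sketch: enable strict xfail for the whole test suite
[pytest]
xfail_strict = true
```

An individual test can still override the suite-wide default via the marker, e.g. `@pytest.mark.xfail(strict=False)`, or opt in with `strict=True` when the global default is off.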

Any other comments?

In cuml we have a lot of common checks that fail and hence are marked as xfail. As we go through fixing things it would be good to have strict mode enabled to help keep the xfail list up to date. We don't really want to turn on strict mode for the whole test suite though (this would be a bigger discussion).

I am not super happy with the testing of this testing infrastructure. For example, I only noticed that passing strict=None to the mark overrides the config from pytest.ini by trying it out for real; I couldn't figure out a test that would catch it (except the explicit one I added). Ideas welcome, as are comments saying "this looks super good enough" :D


github-actions bot commented Aug 15, 2025

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: fa808d2.


@lesteve (Member) commented Aug 28, 2025

I am not super happy with the testing of this testing infrastructure. For example, I only noticed that passing strict=None to the mark overrides the config from pytest.ini by trying it out for real; I couldn't figure out a test that would catch it (except the explicit one I added). Ideas welcome, as are comments saying "this looks super good enough" :D

To make sure I follow, you are saying that pytest.mark.xfail(strict=None) overrides the pytest config? If that's the case, it doesn't surprise me too much: the function-level value overrides the config one, and strict=None is the same as strict=False. If the worry is more "what happens if someone changes the pytest config and this makes some test fail, could we avoid that?", I would say YAGNI (or, in a longer way, "let's not try to tackle this for now and revisit if it happens often enough for us to care") 🤷.
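The precedence rule described here can be illustrated with a tiny, self-contained simplification of how pytest resolves the effective strictness (the function name and dict-based signature below are hypothetical, for illustration only; the precedence — marker kwarg wins over the ini default — matches documented pytest behaviour):

```python
def effective_strict(marker_kwargs, ini_xfail_strict):
    """Sketch of pytest's resolution of xfail strictness.

    pytest reads the marker's ``strict`` kwarg, falling back to the
    ``xfail_strict`` ini value only when the kwarg is absent. An explicit
    ``strict=None`` therefore overrides the config and, being falsy,
    behaves like ``strict=False``.
    """
    return bool(marker_kwargs.get("strict", ini_xfail_strict))

# ini says strict, marker silent -> strict applies
assert effective_strict({}, ini_xfail_strict=True) is True
# marker explicitly passes strict=None -> config overridden, not strict
assert effective_strict({"strict": None}, ini_xfail_strict=True) is False
```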

betatim and others added 3 commits August 29, 2025 14:34
Co-authored-by: Loïc Estève <loic.esteve@ymail.com>
More consistent with the name of the pytest config option
@betatim (Member, Author) commented Aug 29, 2025

To make sure I follow, you are saying that doing pytest.mark.xfail(strict=None) is overriding the pytest config? If that's the case that doesn't surprise me too much the function-level value overrides the config one and strict=None is the same as strict=False.

This is what I meant. The thing I didn't love was that the tests won't tell you this; you have to know it (or try it out). Basically, this whole section:

https://github.com/scikit-learn/scikit-learn/pull/31951/files#diff-49f814863816a0e8d199ba647472450ecc59a4375171db23cb1ca36063d6dc25R462-R468

@lesteve (Member) left a comment

I pushed a small tweak to the test; LGTM.

It would be nice if someone more familiar with the estimator checks could have a look, maybe @adrinjalali?

@adrinjalali (Member) left a comment

We could potentially spawn a process and run pytest, but that seems a bit excessive, I guess.

betatim and others added 3 commits September 1, 2025 11:10
…ode' into parametrize-with-checks-strict-mode
Co-authored-by: Adrin Jalali <adrin.jalali@gmail.com>
@betatim (Member, Author) commented Sep 1, 2025

Thanks for the comments.

@adrinjalali adrinjalali merged commit 6c86237 into scikit-learn:main Sep 1, 2025
36 checks passed
@adrinjalali (Member) commented
We might even try to have this in sklearn and see how many tests are passing now. Nice idea!

@betatim betatim deleted the parametrize-with-checks-strict-mode branch September 1, 2025 11:51
@betatim (Member, Author) commented Sep 1, 2025

I'll make a PR enabling this for scikit-learn just to see what happens.

Successfully merging this pull request may close these issues.

Allow common estimator checks to use xfail_strict=True