TST Checks can now skip test based on estimator tag _xfail_test #16510
Conversation
sklearn/utils/estimator_checks.py
Outdated
@wraps(check)
def wrapper(name, estimator_orig):
    xfail_checks = _safe_tags(estimator_orig, '_xfail_test')
    check_name = _set_check_estimator_ids(check).split("(", maxsplit=1)[0]
I don't think we need the .split; it's intentionally there to differentiate checks with the same name but different arguments, see e.g. #16507.
sklearn/utils/estimator_checks.py
Outdated
try:
    msg = xfail_checks[check_name]
    import pytest
    pytest.xfail(msg)
I agree we need something like it, and indeed raising SkipTest on xfail was missing in the original PR; I added it differently here: #16507 (comment).
However, with pytest installed, wouldn't this stop execution with an xfail exception, so we would never get an XPASS? Maybe there is some way around it with pytest.param(..., marks=pytest.mark.xfail()), not sure.
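For context, here is a minimal sketch (not scikit-learn's actual code; the check functions are made up) of the difference being discussed between calling pytest.xfail() inside the test and attaching an xfail mark via pytest.param():

# Minimal sketch, not the actual scikit-learn code: illustrates why an
# xfail *mark* can still report XPASS while pytest.xfail() never can.
import pytest


def passing_check():
    assert True  # a check that unexpectedly passes


# Calling pytest.xfail() raises immediately, so the check below it never
# runs and the outcome can only be "xfailed", never XPASS.
def test_imperative_xfail():
    pytest.xfail("known failure")
    passing_check()  # unreachable


# Attaching the mark via pytest.param() lets the check run to completion;
# if it passes, pytest reports XPASS instead of silently skipping it.
@pytest.mark.parametrize(
    "check",
    [pytest.param(passing_check,
                  marks=pytest.mark.xfail(reason="known failure"))],
)
def test_marked_xfail(check):
    check()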
Updated this PR with another approach. The parametrize is now generated in such a way that the xfails are marked.
Thanks, have you checked that this indeed runs the tests (I know it should), i.e. that marking a passing test with xfail results in an XPASS?
sklearn/utils/estimator_checks.py
Outdated
    return pytest.param(
        estimator, check, marks=pytest.mark.xfail(reason=msg))

except KeyError:
I'm not very keen on this. A KeyError can happen for a variety of reasons, say inside pytest.param, in which case it would silently swallow that error.
I would rather we did

msg = xfail_checks.get(check_name, None)
if msg is not None:

or something similar. Generally, the KeyError case is the default here (since most tests are not xfailed), and triggering/catching an exception on that common path is expensive and suboptimal: https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Exceptions
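For illustration, a self-contained sketch of the suggested lookup pattern; the dictionary contents and check names below are made up:

# Sketch of the suggested lookup, with made-up contents; `xfail_checks`
# maps check names to xfail reasons, and most checks are absent from it.
xfail_checks = {"check_class_weight_classifiers": "class_weight is ignored"}
check_name = "check_estimators_dtypes"

# dict.get keeps the common (non-xfail) path free of exception handling and,
# unlike a broad `except KeyError`, cannot swallow a KeyError raised for an
# unrelated reason elsewhere in the block.
msg = xfail_checks.get(check_name, None)
if msg is not None:
    print(f"would mark {check_name} as xfail: {msg}")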
Makes sense in this case. I have made the change.
Minor comment otherwise LGTM, thanks!
The following will fail:

pytest sklearn/tests/test_common.py -k "NuSVC()-check_class_weight" --runxfail

The following will pass, with an xfail message in the summary:

pytest sklearn/tests/test_common.py -k "NuSVC()-check_class_weight" -rxXs
Thanks @thomasjpfan! Merging as it shouldn't be too controversial, and I also double-checked that it works as expected.
Reference Issues/PRs
Continues #16502
What does this implement/fix? Explain your changes.
The checks can now skip tests based on the _xfail_test estimator tag.
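For illustration, a hedged sketch of how an estimator could declare such a tag; the estimator class, the check name, and the reason below are invented, and the tag key follows the _xfail_test name used in this PR:

# Hypothetical estimator illustrating the `_xfail_test` tag discussed in this
# PR; the class name, check name, and reason are for illustration only.
from sklearn.base import BaseEstimator


class FragileEstimator(BaseEstimator):
    def _more_tags(self):
        # Map the name of a known-failing common check to its xfail reason.
        return {
            "_xfail_test": {
                "check_class_weight_classifiers":
                    "class_weight is not fully supported yet",
            }
        }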