bpo-30856: Update TestResult early, without buffering in _Outcome #28180


Merged

Conversation

@serhiy-storchaka (Member) commented Sep 5, 2021

TestResult methods addFailure(), addError(), addSkip() and
addSubTest() are now called immediately after raising an exception
in a test or finishing a subtest. Previously they were called only
after finishing the test cleanup.

https://bugs.python.org/issue30856
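
For illustration, a minimal, hypothetical sketch of what the change means in practice (RecordingResult and ExampleTest are invented names, not part of this PR): a result class now sees addError() fire before the test's cleanups run, not after they have finished.

    import unittest

    class RecordingResult(unittest.TestResult):
        """Hypothetical result class that logs when an error is reported."""
        def addError(self, test, err):
            super().addError(test, err)
            print('addError:', test)

    class ExampleTest(unittest.TestCase):
        def test_fails(self):
            self.addCleanup(lambda: print('cleanup runs'))
            raise RuntimeError('boom')

    # After this PR, 'addError: ...' prints before 'cleanup runs';
    # previously addError() was only called once the cleanups had finished.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
    suite.run(RecordingResult())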

@@ -70,17 +68,40 @@ def testPartExecutor(self, test_case, isTest=False):
            self.expectedFailure = exc_info
        else:
            self.success = False
            self.errors.append((test_case, exc_info))
            if subTest:
                self.result.addSubTest(test_case.test_case, test_case, exc_info)
@ambv (Contributor) Sep 17, 2021

_addSkip checks whether the TestResult given has an addSkip method. Are we sure that it will always have an addSubTest method? Sub-test support (BPO-16997) was added later than test skipping support (available since unittest.py was split into a package in bed7d04).

In fact, _Outcome specifically tests for this with its result_supports_subtests attribute. I would add a check here for this just to be safe. I know that TestCase will exit early if _Outcome has result_supports_subtests set to False but this is a weird circular dependency. _Outcome should make this check on its own for consistency.
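
(For reference, the _addSkip pattern being described, paraphrased from unittest/case.py; exact wording may differ between versions:)

    import warnings

    def _addSkip(result, test_case, reason):
        # Degrade gracefully when the result object predates skip support.
        addSkip = getattr(result, 'addSkip', None)
        if addSkip is not None:
            addSkip(test_case, reason)
        else:
            warnings.warn("TestResult has no addSkip method, skips not reported",
                          RuntimeWarning, 2)
            result.addSuccess(test_case)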

@serhiy-storchaka (Member, Author)

Yes, I am sure. testPartExecutor() with subTest=True is only called in subTest(), and only if _outcome.result_supports_subtests is true.

Adding a redundant check would make the code less clear. If we add it, the reader has to think about yet another possible special case: subTest is true, but the result doesn't have addSubTest. In what circumstances is that possible? Is it covered by tests? The answer is: it is not possible, and no tests are needed.
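
(For context, the guard in question lives in TestCase.subTest; a simplified paraphrase of unittest/case.py:)

    @contextlib.contextmanager
    def subTest(self, msg=_subtest_msg_sentinel, **params):
        # With no addSubTest on the result, subtests are disabled outright and
        # this context manager is a no-op, so testPartExecutor(subTest=True)
        # is never reached in that case.
        if self._outcome is None or not self._outcome.result_supports_subtests:
            yield
            return
        ...  # run the body under testPartExecutor(subTest=True)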

@ambv (Contributor)

I disagree. The subTest method argument to the executor means "this test part is executing as a sub-test". It has nothing to do with whether the TestResult object used by _Outcome supports addSubTest. Those are two different things:

  • TestResult is provided by the framework you're using;
  • "executing as a sub-test" is signified by user code putting a self.subTest context manager in a user test method.

If what you said were true, then we wouldn't have to test for skip support either, right? After all, if SkipTest is raised, that means unittest supports it, no? But we check for it anyway.

Lastly, putting an additional if statement here that uses the already existing self.result_supports_subtests attribute of that same class feels like the correct thing to do, because it's _Outcome's own state. As it stands now, you have made it the responsibility of every caller of outcome.testPartExecutor to ensure that outcome.result_supports_subtests is True, which is weird.

@serhiy-storchaka (Member, Author)

> The subTest method argument to the executor means "this test part is executing as a sub-test". It has nothing to do with whether the TestResult object used by _Outcome supports addSubTest.

The executor is only used for subtests if the TestResult object supports addSubTest. If it does not, there are no subtests, and the subTest() context manager is a no-op. No addSubTest, no subtests.

result_supports_subtests is not _Outcome's own state. It is only used outside of _Outcome. Actually, its only purpose is caching the result of hasattr(result, "addSubTest"), because it is usually tested several times per test. It is only a micro-optimization.
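
(A sketch of that caching, as set up in _Outcome.__init__:)

    # In _Outcome.__init__ (paraphrased): the attribute merely caches one
    # hasattr() check so it is not repeated for every subTest() call.
    self.result_supports_subtests = hasattr(result, 'addSubTest')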

@ambv (Contributor) Sep 17, 2021

You're describing the current state of how the code works, but that is fragile. None of this is explicitly documented. If there is ever a new caller of the executor, it might not check result_supports_subtests on its own and break in that way.

> result_supports_subtests is not _Outcome's own state. It is only used outside of _Outcome. Actually, its only purpose is caching the result of hasattr(result, "addSubTest"), because it is usually tested several times per test. It is only a micro-optimization.

The attribute is certainly available for other users of an _Outcome object to look at, but saying it's not its state is kind of weird when it's defined in its own __init__. Not using it directly but relying on it being checked by an external user is an anti-pattern; I cannot put it any more clearly.

@serhiy-storchaka (Member, Author)

What is the worst consequence if we do not add any check or assert?

@ambv (Contributor)

As I mentioned in the original comment, I would consider mimicking what _addSkip is doing a good defensive solution: warn about incompatibility but continue running.

To be quite frank, we either need both checks or none of them. It's unclear to me what kind of TestResult-like classes in modern usage actually lack subtest and skip support. Do you know?

But, again, if the code assumes that skip support can't be guaranteed in TestResult objects, then neither can subtest support. And so, recovering from that seems the sensible thing to do.

@serhiy-storchaka (Member, Author)

We are going in circles. There is already a check for addSubTest. It is just earlier in the code, because the absence of addSubTest affects more things. Here, addSubTest is only called if the subTest parameter is true, and it can be true only if result_supports_subtests is true, which is set to the result of that check.

What is the worst consequence of not adding the additional (redundant) check that you want to add?

@ambv (Contributor)

I think our disagreement is about the fact that, in my opinion, the current state of things crosses boundaries of responsibility when calling a method on Object A is only safe because you, the caller, previously have to check that an attribute on Object A is True.

Functionally everything works today because, as you're saying, TestCase.subTest has a check for result_supports_subtests. But that doesn't mean it's good factoring.

In any case, I suspect the likelihood of TestResult-like objects that don't subclass the actual TestResult is pretty low in 2021, so this is all low stakes. I obviously failed to convince you, so carry on.

@serhiy-storchaka (Member, Author)

I do not think there are any boundaries here. _Outcome is not a class with internal state and a public interface. It is just a collection of variables and one function. Its only purpose is to reduce code repetition. All its state is open to the code that uses it.

It does not merely work today. I am still looking for a scenario in which the additional check would help.

As for supporting TestResult-like objects that don't subclass the actual TestResult, I think it is time to start deprecating that. But it has no relation to this code.

@ambv (Contributor) commented Sep 17, 2021

BTW, the original introduction of _Outcome was in BPO-10611 and I checked that this PR (GH-28180) preserves the fixed behavior that Michael wanted in that issue. 👍🏻

@serhiy-storchaka (Member, Author)

I am merging it now. We can continue the discussion if you show a case in which additional checks would be helpful.

@serhiy-storchaka serhiy-storchaka merged commit 664448d into python:main Sep 19, 2021
@serhiy-storchaka serhiy-storchaka deleted the unittest-result-immediate-update branch September 19, 2021 12:24
niyas-sait pushed a commit to niyas-sait/cpython that referenced this pull request Sep 21, 2021
…thonGH-28180)

yilei added a commit to yilei/abseil-py that referenced this pull request Jul 18, 2022
…thon#28180.

After the change, the errors from the test case are no longer buffered in `_Outcome`. Instead, we need to take note of the errors on the `TestResult` before the test runs, then compare them after.

PiperOrigin-RevId: 461070558
Change-Id: I7d9f769cd46143a886bc5edeb224804acd8dbaf9
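
A minimal sketch of that snapshot-and-compare workaround, assuming the result object exposes TestResult-style errors and failures lists; SnapshotTest and _react_to_outcome are illustrative names, not abseil-py's actual code:

    import unittest

    class SnapshotTest(unittest.TestCase):
        def setUp(self):
            # Snapshot the error/failure counts before the test body runs.
            # self._outcome.result is a CPython implementation detail and
            # may not be a real TestResult under other runners.
            result = self._outcome.result
            self._errors_before = len(getattr(result, 'errors', ()))
            self._failures_before = len(getattr(result, 'failures', ()))
            self.addCleanup(self._react_to_outcome)

        def _react_to_outcome(self):
            result = self._outcome.result
            failed = (len(getattr(result, 'errors', ())) > self._errors_before or
                      len(getattr(result, 'failures', ())) > self._failures_before)
            if failed:
                print('test body already reported a failure')  # act accordingly

        def test_something(self):
            self.assertTrue(True)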
@jiasli (Contributor) commented Oct 19, 2023

This change makes it no longer possible to get test errors during the execution of a cleanup function added by unittest.case.TestCase.addCleanup.

The errors are now immediately sent to result by calling addError on the result object:

_addError(self.result, test_case, exc_info)

def _addError(result, test, exc_info):
    if result is not None and exc_info is not None:
        if issubclass(exc_info[0], test.failureException):
            result.addFailure(test, exc_info)
        else:
            result.addError(test, exc_info)

The actual implementation of result may not necessarily be a subclass of unittest.result.TestResult. For example, pytest passes _pytest.unittest.TestCaseFunction as result. There is no consistent way of retrieving test errors from result, as the implementation of result is test-runner-dependent.

So the question is: After this change, how can a cleanup function get test errors and act accordingly?

Thanks for your help in advance.
