bpo-30856: Update TestResult early, without buffering in _Outcome #28180
Conversation
TestResult methods addFailure(), addError(), addSkip() and addSubTest() are now called immediately after raising an exception in test or finishing a subtest. Previously they were called only after finishing the test clean up.
@@ -70,17 +68,40 @@ def testPartExecutor(self, test_case, isTest=False):
                 self.expectedFailure = exc_info
             else:
                 self.success = False
-                self.errors.append((test_case, exc_info))
+                if subTest:
+                    self.result.addSubTest(test_case.test_case, test_case, exc_info)
`_addSkip` checks whether the given `TestResult` has an `addSkip` method. Are we sure that it will always have an `addSubTest` method? Sub-test support was added later (BPO-16997) than test skipping support (available since unittest.py was split into a package in bed7d04). In fact, `_Outcome` specifically tests for this with its `result_supports_subtests` attribute. I would add a check here for this just to be safe. I know that `TestCase` will exit early if `_Outcome` has `result_supports_subtests` set to `False`, but this is a weird circular dependency. `_Outcome` should make this check on its own for consistency.
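For context, the defensive pattern `_addSkip` uses looks roughly like this (a paraphrased sketch, not copied verbatim from Lib/unittest/case.py): probe for the optional method, and warn but keep going if it is missing.

```python
import warnings

def _addSkip(result, test_case, reason):
    # Roughly the pattern used by TestCase._addSkip: tolerate TestResult-like
    # objects that predate skip support instead of raising AttributeError.
    addSkip = getattr(result, "addSkip", None)
    if addSkip is not None:
        addSkip(test_case, reason)
    else:
        warnings.warn("TestResult has no addSkip method, skips not reported",
                      RuntimeWarning, 2)
        result.addSuccess(test_case)
```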
Yes, I am sure. `testPartExecutor()` with `subTest=True` is only called in `subTest()` if `_outcome.result_supports_subtests` is true.

Adding a redundant check would make the code less clear. If we add it, the reader has to think about yet another possible special case: `subTest` is true, but `result` doesn't have `addSubTest`. In what circumstances is that possible? Is it covered by tests? The answer is: it is not possible, and no tests are needed.
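A simplified sketch (assumed shape, not the exact CPython source) of the guard being described: when the result lacks `addSubTest`, the `subTest()` context manager returns before the part executor is ever entered with `subTest=True`.

```python
import contextlib

class _TestCaseSketch:
    _outcome = None  # set by run() in the real TestCase

    @contextlib.contextmanager
    def subTest(self, msg=None, **params):
        # No addSubTest on the result -> no sub-test machinery at all:
        # testPartExecutor(subTest=True) is unreachable in that case.
        if self._outcome is None or not self._outcome.result_supports_subtests:
            yield
            return
        # ... otherwise run the body under
        # self._outcome.testPartExecutor(subtest, subTest=True) ...
        yield
```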
I disagree. The `subTest` argument to the executor means "this test part is executing as a sub-test". It has nothing to do with whether the `TestResult` object used by `_Outcome` supports `addSubTest`. Those are two different things:

- the `TestResult` is provided by the framework you're using;
- "executing as a sub-test" is signified by user code putting a `self.subTest` context manager in a user test method.

If what you said was true, then we wouldn't have to test for skip support either, right? After all, if `SkipTest` is raised then it means `unittest` supports it, no? But we check for it anyway.
Lastly, putting an additional `if` statement here that uses the already existing `self.result_supports_subtests` attribute of that same class feels like the correct thing to do because it's `_Outcome`'s own state. As it stands now, you made it the responsibility of the caller of `outcome.testPartExecutor` to ensure that `outcome.result_supports_subtests` is `True`, which is weird.
> The `subTest` argument to the executor means "this test part is executing as a sub-test". It has nothing to do with whether the `TestResult` object used by `_Outcome` supports `addSubTest`.

The executor is only used for subtests if the `TestResult` object supports `addSubTest`. If it does not, there are no subtests, and the `subTest()` context manager is a no-op. No `addSubTest` -- no subtests.

`result_supports_subtests` is not `_Outcome`'s own state. It is only used outside of `_Outcome`. Actually, its only purpose is caching the result of `hasattr(result, "addSubTest")`, because it is usually tested several times per test. It is only a micro-optimization.
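A minimal sketch of the caching being described, simplified from what `_Outcome.__init__` is assumed to look like (the surrounding attributes are illustrative, not an exact copy of the source):

```python
class _OutcomeSketch:
    # result_supports_subtests just memoizes hasattr(result, "addSubTest")
    # so callers don't repeat the hasattr() check for every sub-test in a
    # test method.
    def __init__(self, result=None):
        self.result = result
        self.result_supports_subtests = hasattr(result, "addSubTest")
        self.success = True
        self.expectedFailure = None
```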
You're describing the current state of how the code works, but that is fragile. None of this is explicitly documented. If there ever is a new caller of the executor, it might not check `result_supports_subtests` on its own and break in that way.

> `result_supports_subtests` is not `_Outcome`'s own state. It is only used outside of `_Outcome`. Actually, its only purpose is caching the result of `hasattr(result, "addSubTest")`, because it is usually tested several times per test. It is only a micro-optimization.

The attribute is certainly available for other users of an `_Outcome` object to look at, but saying it's not its state is kind of weird when it's defined in its own `__init__`. Not using it directly but relying on it being checked by an external user is an anti-pattern; I cannot put it any clearer.
What is the worst consequence if we don't add any check or assert?
As I mentioned in the original comment, I would consider mimicking what `_addSkip` is doing a good defensive solution: warn about the incompatibility but continue running.

To be quite frank, we either need both checks or neither of them. It's unclear to me what kind of TestResult-like classes in modern usage actually lack subtest and skip support. Do you know?

But, again, if the code assumes that skip support can't be guaranteed in `TestResult` objects, then neither can subtest support. And so, recovering from that seems the sensible thing to do.
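What "mimicking `_addSkip`" for sub-tests might look like, as a hypothetical sketch (the helper name and the fallback behavior are illustrative, not part of this PR):

```python
import warnings

def _add_sub_test(result, test_case, subtest, exc_info):
    # Hypothetical defensive variant: warn and degrade gracefully when the
    # result object predates sub-test support, instead of assuming addSubTest.
    add_sub_test = getattr(result, "addSubTest", None)
    if add_sub_test is not None:
        add_sub_test(test_case, subtest, exc_info)
    elif exc_info is not None:
        warnings.warn("TestResult has no addSubTest method, "
                      "reporting the sub-test failure as a test error",
                      RuntimeWarning, 2)
        result.addError(test_case, exc_info)
```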
We are circling. There is already a check for `addSubTest`. It is just earlier in the code, because the absence of `addSubTest` affects more things. Here, `addSubTest` is only called if the `subTest` parameter is true, and it can be true only if `result_supports_subtests` is true, which is set to the result of that check.

What is the worst consequence if we don't add the additional (redundant) check which you want to add?
I think our disagreement is about the fact that, in my opinion, the current state of things crosses boundaries of responsibility: calling a method on object A is only safe because you, the caller, previously have to check that an attribute on object A is `True`.

Functionally, everything works today because, as you're saying, `TestCase.subTest` has a check for `result_supports_subtests`. But that doesn't mean it's good factoring.

In any case, I suspect the likelihood of TestResult-like objects that don't subclass the actual `TestResult` is pretty low in 2021, so this is all low stakes. I obviously failed to convince you, so carry on.
I do not think there are any boundaries here. `_Outcome` is not some class with internal state and a public interface. It is just a collection of variables and one function. Its only purpose is to reduce code repetition. All its state is open to the code which uses it.

It does not only work today. I am still searching for a scenario in which the additional check would help.

As for supporting TestResult-like objects that don't subclass the actual `TestResult`, I think it is time to start deprecating that. But that has no relation to this code.
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
I am merging it now. We can continue the discussion if you show a case in which additional checks would be helpful.
…thonGH-28180) TestResult methods addFailure(), addError(), addSkip() and addSubTest() are now called immediately after raising an exception in test or finishing a subtest. Previously they were called only after finishing the test clean up.
…thon#28180. After the change, the errors from the test case were no longer buffered in `_Outcome`. Instead, we need to take a note of errors on the `TestResult` before, then compare errors after. PiperOrigin-RevId: 461070558 Change-Id: I7d9f769cd46143a886bc5edeb224804acd8dbaf9
This change makes it no longer possible to get test errors during the execution of a cleanup function added by `addCleanup()`. The errors are now immediately sent to the `TestResult`:

Line 74 in 26a0282

Lines 94 to 99 in 26a0282

The actual implementation of … So the question is: after this change, how can a cleanup function get test errors and act accordingly? Thanks for your help in advance.
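One possible answer, sketched under assumptions: the referencing commit above describes noting the errors on the `TestResult` before and comparing after. `self._outcome.result` is an internal attribute, so treat this as an illustrative workaround rather than a supported API.

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_something(self):
        # Snapshot the result's error/failure counts up front; anything
        # appended later was reported by this test, since errors now reach
        # the TestResult immediately rather than being buffered in _Outcome.
        result = self._outcome.result  # internal, not a public API
        errors_before = len(result.errors)
        failures_before = len(result.failures)

        def cleanup():
            failed = (len(result.errors) > errors_before or
                      len(result.failures) > failures_before)
            if failed:
                print("test failed; run extra diagnostics during cleanup")

        self.addCleanup(cleanup)
        self.assertEqual(1 + 1, 2)
```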
TestResult methods addFailure(), addError(), addSkip() and
addSubTest() are now called immediately after raising an exception
in test or finishing a subtest. Previously they were called only
after finishing the test clean up.
https://bugs.python.org/issue30856