bpo-30856: Update TestResult early, without buffering in _Outcome #28180

Merged

66 changes: 30 additions & 36 deletions Lib/unittest/case.py
@@ -47,12 +47,10 @@ def __init__(self, result=None):
self.result = result
self.result_supports_subtests = hasattr(result, "addSubTest")
self.success = True
self.skipped = []
self.expectedFailure = None
self.errors = []

@contextlib.contextmanager
def testPartExecutor(self, test_case, isTest=False):
def testPartExecutor(self, test_case, subTest=False):
old_success = self.success
self.success = True
try:
@@ -61,7 +59,7 @@ def testPartExecutor(self, test_case, isTest=False):
raise
except SkipTest as e:
self.success = False
self.skipped.append((test_case, str(e)))
_addSkip(self.result, test_case, str(e))
except _ShouldStop:
pass
except:
@@ -70,17 +68,36 @@ def testPartExecutor(self, test_case, isTest=False):
self.expectedFailure = exc_info
else:
self.success = False
self.errors.append((test_case, exc_info))
if subTest:
self.result.addSubTest(test_case.test_case, test_case, exc_info)
ambv (Contributor), Sep 17, 2021:

_addSkip checks whether the TestResult given has an addSkip method. Are we sure that it will always have an addSubTest method? Sub-test support was added later (BPO-16997) than test skipping support (available since unittest.py was split into a package in bed7d04).

In fact, _Outcome specifically tests for this with its result_supports_subtests attribute. I would add a check here for this just to be safe. I know that TestCase will exit early if _Outcome has result_supports_subtests set to False but this is a weird circular dependency. _Outcome should make this check on its own for consistency.
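
A minimal sketch of the guard being suggested, reusing the attribute _Outcome already computes (hypothetical; the PR as merged does not include it):

# Hypothetical variant of the branch above -- not part of this PR:
if subTest:
    if self.result_supports_subtests:
        self.result.addSubTest(test_case.test_case, test_case, exc_info)
    else:
        # degrade to ordinary error reporting, as _addSkip does
        # for results without an addSkip method
        _addError(self.result, test_case, exc_info)
else:
    _addError(self.result, test_case, exc_info)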

Member Author:

Yes, I am sure. testPartExecutor() with subTest=True is only called in subTest() if _outcome.result_supports_subtests is true.

Adding a redundant check would make the code less clear. If we add it, the reader has to think about yet another possible special case: subTest is true, but the result doesn't have addSubTest. In what circumstances is that possible? Is it covered by tests? The answer is: no, it is not possible, and no tests are needed.

Contributor:

I disagree. The subTest method argument to the executor means "this test part is executing as a sub-test". It has nothing to do with whether the TestResult object used by _Outcome supports addSubTest. Those are two different things:

  • TestResult is provided by the framework you're using;
  • "executing as a sub-test" is signified by user code putting a self.subTest context manager in a user test method.

If what you said was true, then we wouldn't have to test for skip support either, right? After all, if SkipTest is raised then it means unittest supports it, no? But we check for it anyway.

Lastly, putting an additional if statement here that uses the already existing result_supports_subtests attribute of that same class feels like the correct thing to do, because it is _Outcome's own state. As it stands, you have made it the responsibility of the caller of outcome.testPartExecutor to ensure that outcome.result_supports_subtests is True, which is weird.

Member Author:

> The subTest method argument to the executor means "this test part is executing as a sub-test". It has nothing to do with whether the TestResult object used by _Outcome supports addSubTest.

The executor is only used for subtests if the TestResult object supports addSubTest. If it does not, there are no subtests, and the subTest() context manager is a no-op. No addSubTest, no subtests.

result_supports_subtests is not _Outcome's own state; it is only used outside of _Outcome. Its only purpose is to cache the result of hasattr(result, "addSubTest"), because it is usually tested several times per test. It is only a micro-optimization.
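
Concretely, the guard being described sits at the top of TestCase.subTest, roughly (paraphrased from Lib/unittest/case.py):

@contextlib.contextmanager
def subTest(self, msg=_subtest_msg_sentinel, **params):
    # Without addSubTest support the context manager is a no-op:
    # the body still runs, but nothing is reported per-subtest,
    # and testPartExecutor is never entered with subTest=True.
    if self._outcome is None or not self._outcome.result_supports_subtests:
        yield
        return
    ...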

ambv (Contributor), Sep 17, 2021:

You're describing the current state of how the code works, but that is fragile. None of this is explicitly documented. If a new caller of the executor is ever added, it might not check result_supports_subtests on its own and thereby break.

> result_supports_subtests is not _Outcome's own state; it is only used outside of _Outcome. Its only purpose is to cache the result of hasattr(result, "addSubTest"), because it is usually tested several times per test. It is only a micro-optimization.

The attribute is certainly available for other users of an _Outcome object to look at, but saying it's not its state is kind of weird when it's defined in its own __init__. Not using it directly, but relying on it being checked by an external user, is an anti-pattern; I cannot put it any clearer.

Member Author:

What is the worst consequence if we do not add any check or assert?

Contributor:

As I mentioned in the original comment, I would consider mimicking what _addSkip is doing a good defensive solution: warn about incompatibility but continue running.

To be quite frank, we either need both checks or neither. It's unclear to me what kinds of TestResult-like classes in modern usage actually lack subtest and skip support. Do you know?

But, again, if the code assumes that skip support can't be guaranteed in TestResult objects, then subtest support can't be guaranteed either. And so, recovering from its absence seems the sensible thing to do.
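
A sketch of that defensive approach, as a hypothetical _addSubTest helper mirroring _addSkip (not part of this PR; the name and fallback behavior are assumptions):

def _addSubTest(result, test_case, subtest, exc_info):
    # Hypothetical fallback in the spirit of _addSkip: warn about the
    # incompatibility but keep running, reporting the subtest outcome
    # as a plain failure/error on the legacy result object.
    addSubTest = getattr(result, 'addSubTest', None)
    if addSubTest is not None:
        addSubTest(test_case, subtest, exc_info)
    else:
        warnings.warn("TestResult has no addSubTest method, "
                      "subtests not reported",
                      RuntimeWarning, 2)
        if exc_info is not None:
            _addError(result, subtest, exc_info)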

Member Author:

We are going in circles. There is already a check for addSubTest; it is just earlier in the code, because the absence of addSubTest affects more things. Here, addSubTest is only called if the subTest parameter is true, and that can be true only if result_supports_subtests is true, which is set to the result of the check.

What is the worst consequence of not adding the additional (redundant) check that you want?

Contributor:

I think our disagreement is about the fact that, in my opinion, the current state of things crosses boundaries of responsibility: calling a method on object A is only safe because you, the caller, previously have to check that an attribute on object A is True.

Functionally, everything works today because, as you say, TestCase.subTest has a check for result_supports_subtests. But that doesn't mean it's good factoring.

In any case, I suspect the likelihood of TestResult-like objects that don't subclass the actual TestResult is pretty low in 2021, so this is all low stakes. I obviously failed to convince you, so carry on.

Member Author:

I do not think there are any boundaries here. _Outcome is not a class with internal state and a public interface; it is just a collection of variables and one function. Its only purpose is to reduce code repetition. All its state is open to the code that uses it.

It does not only work today. I am still searching for a scenario in which the additional check would help.

As for supporting TestResult-like objects that don't subclass the actual TestResult, I think it is time to start deprecating that. But that has no relation to this code.

else:
_addError(self.result, test_case, exc_info)
# explicitly break a reference cycle:
# exc_info -> frame -> exc_info
exc_info = None
else:
if self.result_supports_subtests and self.success:
self.errors.append((test_case, None))
if subTest and self.success:
self.result.addSubTest(test_case.test_case, test_case, None)
finally:
self.success = self.success and old_success


def _addSkip(result, test_case, reason):
addSkip = getattr(result, 'addSkip', None)
if addSkip is not None:
addSkip(test_case, reason)
else:
warnings.warn("TestResult has no addSkip method, skips not reported",
RuntimeWarning, 2)
result.addSuccess(test_case)

def _addError(result, test, exc_info):
if result is not None and exc_info is not None:
if issubclass(exc_info[0], test.failureException):
result.addFailure(test, exc_info)
else:
result.addError(test, exc_info)

def _id(obj):
return obj

@@ -467,15 +484,6 @@ def __repr__(self):
return "<%s testMethod=%s>" % \
(strclass(self.__class__), self._testMethodName)

def _addSkip(self, result, test_case, reason):
addSkip = getattr(result, 'addSkip', None)
if addSkip is not None:
addSkip(test_case, reason)
else:
warnings.warn("TestResult has no addSkip method, skips not reported",
RuntimeWarning, 2)
result.addSuccess(test_case)

@contextlib.contextmanager
def subTest(self, msg=_subtest_msg_sentinel, **params):
"""Return a context manager that will return the enclosed block
@@ -494,7 +502,7 @@ def subTest(self, msg=_subtest_msg_sentinel, **params):
params_map = parent.params.new_child(params)
self._subtest = _SubTest(self, msg, params_map)
try:
with self._outcome.testPartExecutor(self._subtest, isTest=True):
with self._outcome.testPartExecutor(self._subtest, subTest=True):
yield
if not self._outcome.success:
result = self._outcome.result
@@ -507,16 +515,6 @@ def subTest(self, msg=_subtest_msg_sentinel, **params):
finally:
self._subtest = parent

def _feedErrorsToResult(self, result, errors):
for test, exc_info in errors:
if isinstance(test, _SubTest):
result.addSubTest(test.test_case, test, exc_info)
elif exc_info is not None:
if issubclass(exc_info[0], self.failureException):
result.addFailure(test, exc_info)
else:
result.addError(test, exc_info)

def _addExpectedFailure(self, result, exc_info):
try:
addExpectedFailure = result.addExpectedFailure
@@ -574,7 +572,7 @@ def run(self, result=None):
# If the class or method was skipped.
skip_why = (getattr(self.__class__, '__unittest_skip_why__', '')
or getattr(testMethod, '__unittest_skip_why__', ''))
self._addSkip(result, self, skip_why)
_addSkip(result, self, skip_why)
return result

expecting_failure = (
@@ -589,16 +587,13 @@ def run(self, result=None):
self._callSetUp()
if outcome.success:
outcome.expecting_failure = expecting_failure
with outcome.testPartExecutor(self, isTest=True):
with outcome.testPartExecutor(self):
self._callTestMethod(testMethod)
outcome.expecting_failure = False
with outcome.testPartExecutor(self):
self._callTearDown()

self.doCleanups()
for test, reason in outcome.skipped:
self._addSkip(result, test, reason)
self._feedErrorsToResult(result, outcome.errors)

if outcome.success:
if expecting_failure:
if outcome.expectedFailure:
@@ -609,11 +604,10 @@ def run(self, result=None):
result.addSuccess(self)
return result
finally:
# explicitly break reference cycles:
# outcome.errors -> frame -> outcome -> outcome.errors
# explicitly break reference cycle:
# outcome.expectedFailure -> frame -> outcome -> outcome.expectedFailure
outcome.errors.clear()
outcome.expectedFailure = None
outcome = None

# clear the outcome, no more needed
self._outcome = None
28 changes: 14 additions & 14 deletions Lib/unittest/test/test_case.py
@@ -197,8 +197,8 @@ def test(self):
super(Foo, self).test()
raise RuntimeError('raised by Foo.test')

expected = ['startTest', 'setUp', 'test', 'tearDown',
'addError', 'stopTest']
expected = ['startTest', 'setUp', 'test',
'addError', 'tearDown', 'stopTest']
Foo(events).run(result)
self.assertEqual(events, expected)

@@ -216,7 +216,7 @@ def test(self):
raise RuntimeError('raised by Foo.test')

expected = ['startTestRun', 'startTest', 'setUp', 'test',
'tearDown', 'addError', 'stopTest', 'stopTestRun']
'addError', 'tearDown', 'stopTest', 'stopTestRun']
Foo(events).run()
self.assertEqual(events, expected)

@@ -236,8 +236,8 @@ def test(self):
super(Foo, self).test()
self.fail('raised by Foo.test')

expected = ['startTest', 'setUp', 'test', 'tearDown',
'addFailure', 'stopTest']
expected = ['startTest', 'setUp', 'test',
'addFailure', 'tearDown', 'stopTest']
Foo(events).run(result)
self.assertEqual(events, expected)

@@ -252,7 +252,7 @@ def test(self):
self.fail('raised by Foo.test')

expected = ['startTestRun', 'startTest', 'setUp', 'test',
'tearDown', 'addFailure', 'stopTest', 'stopTestRun']
'addFailure', 'tearDown', 'stopTest', 'stopTestRun']
events = []
Foo(events).run()
self.assertEqual(events, expected)
@@ -353,19 +353,19 @@ def test(self):
def test_run_call_order__subtests(self):
events = []
result = LoggingResult(events)
expected = ['startTest', 'setUp', 'test', 'tearDown',
expected = ['startTest', 'setUp', 'test',
'addSubTestFailure', 'addSubTestSuccess',
'addSubTestFailure', 'addSubTestFailure',
'addSubTestSuccess', 'addError', 'stopTest']
'addSubTestSuccess', 'addError', 'tearDown', 'stopTest']
self._check_call_order__subtests(result, events, expected)

def test_run_call_order__subtests_legacy(self):
# With a legacy result object (without an addSubTest method),
# test execution stops after the first subtest failure.
events = []
result = LegacyLoggingResult(events)
expected = ['startTest', 'setUp', 'test', 'tearDown',
'addFailure', 'stopTest']
expected = ['startTest', 'setUp', 'test',
'addFailure', 'tearDown', 'stopTest']
self._check_call_order__subtests(result, events, expected)

def _check_call_order__subtests_success(self, result, events, expected_events):
@@ -386,9 +386,9 @@ def test_run_call_order__subtests_success(self):
result = LoggingResult(events)
# The 6 subtest successes are individually recorded, in addition
# to the whole test success.
expected = (['startTest', 'setUp', 'test', 'tearDown']
expected = (['startTest', 'setUp', 'test']
+ 6 * ['addSubTestSuccess']
+ ['addSuccess', 'stopTest'])
+ ['tearDown', 'addSuccess', 'stopTest'])
self._check_call_order__subtests_success(result, events, expected)

def test_run_call_order__subtests_success_legacy(self):
@@ -413,8 +413,8 @@ def test(self):
self.fail('failure')
self.fail('failure')

expected = ['startTest', 'setUp', 'test', 'tearDown',
'addSubTestFailure', 'stopTest']
expected = ['startTest', 'setUp', 'test',
'addSubTestFailure', 'tearDown', 'stopTest']
Foo(events).run(result)
self.assertEqual(events, expected)

8 changes: 4 additions & 4 deletions Lib/unittest/test/test_functiontestcase.py
@@ -58,8 +58,8 @@ def test():
def tearDown():
events.append('tearDown')

expected = ['startTest', 'setUp', 'test', 'tearDown',
'addError', 'stopTest']
expected = ['startTest', 'setUp', 'test',
'addError', 'tearDown', 'stopTest']
unittest.FunctionTestCase(test, setUp, tearDown).run(result)
self.assertEqual(events, expected)

@@ -84,8 +84,8 @@ def test():
def tearDown():
events.append('tearDown')

expected = ['startTest', 'setUp', 'test', 'tearDown',
'addFailure', 'stopTest']
expected = ['startTest', 'setUp', 'test',
'addFailure', 'tearDown', 'stopTest']
unittest.FunctionTestCase(test, setUp, tearDown).run(result)
self.assertEqual(events, expected)

17 changes: 12 additions & 5 deletions Lib/unittest/test/test_result.py
@@ -816,7 +816,8 @@ def test_foo(self):
self.assertEqual(str(test_case), description)
self.assertIn('ValueError: bad cleanup2', formatted_exc)
self.assertNotIn('TypeError', formatted_exc)
self.assertIn(expected_out, formatted_exc)
self.assertIn('\nStdout:\nset up\ndo cleanup2\n', formatted_exc)
self.assertNotIn('\ndo cleanup1\n', formatted_exc)
test_case, formatted_exc = result.errors[1]
self.assertEqual(str(test_case), description)
self.assertIn('TypeError: bad cleanup1', formatted_exc)
@@ -847,13 +848,16 @@ def test_foo(self):
self.assertIn('ZeroDivisionError: division by zero', formatted_exc)
self.assertNotIn('ValueError', formatted_exc)
self.assertNotIn('TypeError', formatted_exc)
self.assertIn(expected_out, formatted_exc)
self.assertIn('\nStdout:\nset up\n', formatted_exc)
self.assertNotIn('\ndo cleanup2\n', formatted_exc)
self.assertNotIn('\ndo cleanup1\n', formatted_exc)
test_case, formatted_exc = result.errors[1]
self.assertEqual(str(test_case), description)
self.assertIn('ValueError: bad cleanup2', formatted_exc)
self.assertNotIn('ZeroDivisionError', formatted_exc)
self.assertNotIn('TypeError', formatted_exc)
self.assertIn(expected_out, formatted_exc)
self.assertIn('\nStdout:\nset up\ndo cleanup2\n', formatted_exc)
self.assertNotIn('\ndo cleanup1\n', formatted_exc)
test_case, formatted_exc = result.errors[2]
self.assertEqual(str(test_case), description)
self.assertIn('TypeError: bad cleanup1', formatted_exc)
@@ -887,13 +891,16 @@ def test_foo(self):
self.assertIn('ZeroDivisionError: division by zero', formatted_exc)
self.assertNotIn('ValueError', formatted_exc)
self.assertNotIn('TypeError', formatted_exc)
self.assertIn(expected_out, formatted_exc)
self.assertIn('\nStdout:\nset up\ntear down\n', formatted_exc)
self.assertNotIn('\ndo cleanup2\n', formatted_exc)
self.assertNotIn('\ndo cleanup1\n', formatted_exc)
test_case, formatted_exc = result.errors[1]
self.assertEqual(str(test_case), description)
self.assertIn('ValueError: bad cleanup2', formatted_exc)
self.assertNotIn('ZeroDivisionError', formatted_exc)
self.assertNotIn('TypeError', formatted_exc)
self.assertIn(expected_out, formatted_exc)
self.assertIn('\nStdout:\nset up\ntear down\ndo cleanup2\n', formatted_exc)
self.assertNotIn('\ndo cleanup1\n', formatted_exc)
test_case, formatted_exc = result.errors[2]
self.assertEqual(str(test_case), description)
self.assertIn('TypeError: bad cleanup1', formatted_exc)
14 changes: 9 additions & 5 deletions Lib/unittest/test/test_runner.py
@@ -78,7 +78,8 @@ def testNothing(self):
pass

test = TestableTest('testNothing')
outcome = test._outcome = _Outcome()
result = unittest.TestResult()
outcome = test._outcome = _Outcome(result=result)

CleanUpExc = Exception('foo')
exc2 = Exception('bar')
@@ -94,10 +95,13 @@ def cleanup2():
self.assertFalse(test.doCleanups())
self.assertFalse(outcome.success)

((_, (Type1, instance1, _)),
(_, (Type2, instance2, _))) = reversed(outcome.errors)
self.assertEqual((Type1, instance1), (Exception, CleanUpExc))
self.assertEqual((Type2, instance2), (Exception, exc2))
(_, msg2), (_, msg1) = result.errors
self.assertIn('in cleanup1', msg1)
self.assertIn('raise CleanUpExc', msg1)
self.assertIn('Exception: foo', msg1)
self.assertIn('in cleanup2', msg2)
self.assertIn('raise exc2', msg2)
self.assertIn('Exception: bar', msg2)

def testCleanupInRun(self):
blowUp = False
2 changes: 1 addition & 1 deletion Lib/unittest/test/test_skipping.py
@@ -197,7 +197,7 @@ def tearDown(self):
result = LoggingResult(events)
test = Foo("test_skip_me")
self.assertIs(test.run(result), result)
self.assertEqual(events, ['startTest', 'addSkip', 'addFailure', 'stopTest'])
self.assertEqual(events, ['startTest', 'addFailure', 'addSkip', 'stopTest'])
self.assertEqual(result.skipped, [(test, "skip")])

def test_skipping_and_fail_in_cleanup(self):
@@ -0,0 +1,6 @@
:class:`unittest.TestResult` methods
:meth:`~unittest.TestResult.addFailure`,
:meth:`~unittest.TestResult.addError`, :meth:`~unittest.TestResult.addSkip`
and :meth:`~unittest.TestResult.addSubTest` are now called immediately after
an exception is raised in a test or a subtest finishes. Previously they were
only called after the test cleanup had finished.
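
A minimal self-contained illustration of the new ordering (a sketch; LoggingResult here is a trimmed stand-in for the helper used in the tests above):

import unittest

events = []

class LoggingResult(unittest.TestResult):
    def addError(self, test, exc_info):
        events.append('addError')
        super().addError(test, exc_info)

class Example(unittest.TestCase):
    def tearDown(self):
        events.append('tearDown')

    def test_boom(self):
        events.append('test')
        raise RuntimeError('boom')

Example('test_boom').run(LoggingResult())
print(events)
# with this change:   ['test', 'addError', 'tearDown']
# before this change: ['test', 'tearDown', 'addError']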