tests: add support for writing a test using unittest
#16216
Conversation
@stinos FYI, this might be interesting for you.
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff           @@
##           master   #16216   +/-  ##
=======================================
  Coverage   98.57%   98.57%
=======================================
  Files         164      164
  Lines       21349    21349
=======================================
  Hits        21045    21045
  Misses        304      304
```

View full report in Codecov by Sentry.
Force-pushed from 1967a13 to 8007f84.
I've also added a commit here to convert one of the existing stm32-specific tests to use `unittest`.
Didn't check in detail yet, but this seems a welcome addition. I assume you chose this way (keeping run-tests for these tests, as opposed to e.g. standard unittest discovery) so the existing tests can stay in the same directories as they are now, and skipping based on platform etc. still works?
This would make it impossible to use for us, mainly because we use […]. […] would support quite a lot of cases.
Is full output still printed?
The main reason to keep […]. Also, keeping […].
My main motivation for keeping the unittest detection simple was to keep things fast. Every test needs to be checked for whether it's a unittest test before it's run, and I didn't want to scan through the entire test file. But your suggestion to be a little more flexible/general sounds good to me. An alternative would be to just run the test on the target and check the output: if it ends in the unittest OK/FAILURE line then interpret it as a unittest test, otherwise it's a standard test that needs/has a .exp file. I might actually try doing that instead and see how it goes.
If the test passes then the full output is neither printed nor saved. If the test fails then the full output is saved to the corresponding .py.out file.
Yeah, if it needs to run anyway this is probably the fastest as well.
Force-pushed from 8007f84 to eabcdbc.
I've now pushed a commit to implement this and it's quite clean: it no longer needs to parse the test file to check for `import unittest`. This way has the added benefit of not running tests under CPython if the target requests to skip them.
I'm very excited about this, I think it's going to unlock a lot more flexibility for testing!
There's one thing I noted while testing it out (inline below). I also have a couple of minor questions here, but I think those could be in follow-ups if needed at all.
One is about ergonomics: if unittest is not installed on the target, the runner tries to run the test under CPython and you get a failure like this:
```diff
--- /home/gus/ry/george/micropython/tests/results/net_hosted_connect_nonblock_xfer.py.exp 2024-11-20 09:58:50.383284085 +1100
+++ /home/gus/ry/george/micropython/tests/results/net_hosted_connect_nonblock_xfer.py.out 2024-11-20 09:58:50.383284085 +1100
@@ -1 +1,4 @@
-CPYTHON3 CRASH
\ No newline at end of file
+Traceback (most recent call last):
+  File "<stdin>", line 4, in <module>
+ImportError: no module named 'unittest'
+CRASH
\ No newline at end of file
```
`run-tests.py` could look for a line `ImportError: no module named 'unittest'` as a special case and return a friendly error, or possibly even auto-run `mip install unittest` and re-run the test?
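A rough sketch of the special-case idea (purely illustrative; the function name and the assumption that the captured output is available as bytes are mine, not the actual `run-tests.py` structure):

```python
def warn_if_unittest_missing(output_mupy: bytes) -> bool:
    # Illustrative sketch only, not the actual run-tests.py implementation.
    # Returns True if the failure was caused by unittest missing on the target.
    if b"ImportError: no module named 'unittest'" in output_mupy:
        print("This test requires unittest on the target; install it with:")
        print("    mpremote mip install unittest")
        # ...or, more ambitiously, run the install automatically and retry the test.
        return True
    return False
```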
The other question I have is about CPython: is there value in having a `unittest`-based test that runs under CPython as well, with the test runner checking that it passes on both? I don't know if there's a subset of tests where we're looking for CPython-compatible APIs (or partly CPython-compatible), but making the output 100% the same creates difficulty.
My feeling is that we can leave that one until/unless it comes up as something that would be useful to have, rather than building it now, but I thought I'd bring it up.
This change is really quite clean, I can certainly see myself using this in future! Even just the ability to print the extra details on failure can be very helpful.
FWIW, I have been planning on submitting a trimmed-down reimplementation of pytest for MicroPython. If that is completed and provides value here, support for it (or any other test framework) could easily be added by updating the results-matcher block to look for the pytest output format as well as the unittest one.
tests/README.md (outdated)

> and the output captured, and then run under MicroPython and the output compared to the
> CPython output. The test passes if the output matches exactly. If the output differs
> then the test fails and the outputs are saved in a `.exp` and a `.out` file
> respectively.
This paragraph about tests without a `.exp` file might make more sense moved down lower now, and rewritten slightly to reflect that they're run under CPython only if there's no unittest result returned.
OK, I've moved the unittest paragraph to the top and adjusted the text.
```python
    b")$",
    output_mupy_lines[-1],
)
uses_unittest = unittest_ran_match and unittest_result_match
```
In future it might be worth refactoring this block into a separate `uses_unittest(output_mupy)` function to make it easier to add extra results parsers for any other test mechanism/framework, but I don't necessarily think it's needed at this point. Better to wait until it's needed and ensure the API makes sense.
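For what it's worth, a possible shape for such a helper, pieced together from the checks visible in this diff; the return convention and the simplified regexes are assumptions, not the actual `run-tests.py` code:

```python
import re


def uses_unittest(output_mupy_lines):
    # Illustrative sketch only. Returns (is_unittest_output, passed).
    if len(output_mupy_lines) <= 4:
        return False, False
    # unittest ends its output with: a 70-dash separator, a "Ran N tests" line,
    # a blank line, and finally "OK" or "FAILED (...)".
    if output_mupy_lines[-4] == b"-" * 70 and output_mupy_lines[-2] == b"":
        ran = re.match(rb"Ran \d+ tests?", output_mupy_lines[-3])
        result = re.match(rb"(OK|FAILED)", output_mupy_lines[-1])
        return bool(ran and result), bool(result and result.group(1) == b"OK")
    return False, False
```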
I'll leave that for the future 😄
Would this help to provide more granularity to the feature detection system, so it doesn't rely only on the test filename prefix/suffix? Right now the full […]. At the moment I'm working on a port (WCH) targeting chips that may have as little flash as 128 KBytes total. I haven't yet settled on the default feature set for the lower-end chips, so I literally started with […]
One data point: I have to jump through hoops here - #16272 (comment) - because CPython logs some warnings on interpreter shutdown. One possible path forward would be a unittest-based test that we verify passes on both CPython and MicroPython.
This is something of an aside, but is it reasonable to split these tests up and do "feature detection" at the top of some with a […]?
Right now I'm contemplating adding a tuple of integers to […]. This could, in theory, provide the SKIP behaviour you described, although to keep the amount of changes down I'd use this to alter the test behaviour at runtime instead of splitting tests. An issue that arises from this method is that if you fully bypass a test, you should return the value that the test harness expects. If this value is dependent on the CPython version you run this on, then that becomes a problem. What could go wrong with this approach, besides the added complexity (and build/test run time overhead)? :)
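As an aside on per-test feature detection: one way to express it inside a unittest-based test is with the standard skip decorators, assuming the target's `unittest` provides them. This example is purely illustrative and not taken from the test suite:

```python
import unittest

# Illustrative only: detect an optional feature at import time and skip just the
# tests that depend on it, rather than skipping the whole file.
try:
    import select

    has_select = True
except ImportError:
    has_select = False


class TestFeatures(unittest.TestCase):
    @unittest.skipUnless(has_select, "requires the select module")
    def test_poll_exists(self):
        self.assertTrue(hasattr(select, "poll"))

    def test_always_runs(self):
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    unittest.main()
```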
Force-pushed from eabcdbc to a26aea5.
Now done.
The idea being, if you try to run tests that need unittest then that's a "user error" that needs to be rectified (either don't run those tests, or install unittest).
I think this is hard to do right now because you need to distinguish between tests that use […]
Code size report: […]
@agatti your comments about skipping tests are noted, and I agree it would be good to solve them, but I think they're out of scope for this PR. Ideally we would run tests against the unix minimal variant under CI... if that worked it would definitely help your case, but we just didn't get around to properly supporting minimal builds in the test suite (although it's a good goal).
Apart from squashing commits (I left them for easier review), this should be good to go now.
LGTM, this will be really nice to have!
tests/run-tests.py (outdated)
```python
uses_unittest = False
output_mupy_lines = output_mupy.splitlines()
if any(
    line == b"ImportError: no module named 'unittest'" for line in output_mupy_lines[:3]
```
This assumes there's no other output before the test tries to import unittest, which isn't necessarily the case. Minor issue though since the test will still fail.
If a test uses `unittest` it's very unlikely it will print anything before `import unittest`. The pattern I've been using is (a minimal sketch of this layout follows the list):

- try-import modules that you need for the test, and SKIP if they don't exist
- import unittest (this is done after checking if the test should be SKIPped, so you don't need unittest installed if you aren't going to run the test anyway)
- possibly define some state, helper functions, etc
- define the main test class(es)
- use the standard `if __name__ == "__main__": unittest.main()` code at the end
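For concreteness, a minimal sketch of that layout (the imported module and the test body are placeholders, not taken from an actual test):

```python
# Placeholder layout for a unittest-based test following the pattern above.
try:
    import machine  # module needed by this hypothetical test
except ImportError:
    print("SKIP")
    raise SystemExit

import unittest


class Test(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    unittest.main()
```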
That said, I could change the code here to check the last few lines (really it only needs to check the last line) of the output?
> If a test uses unittest it's very unlikely it will print anything before `import unittest`

I agree that it is unlikely, and it also seems a bit frowned upon to print output during the import phase, but I do have such tests :)

> That said, I could change the code here to check the last few lines (really it only needs to check the last line) of the output?

Seems very sensible.
OK, now updated to check the last 3 lines of the output.
```python
elif (
    len(output_mupy_lines) > 4
    and output_mupy_lines[-4] == b"-" * 70
    and output_mupy_lines[-2] == b""
```
Is this even needed, i.e. won't the `b"-" * 70` check be sufficient?
It's not needed, but I wanted to be strict here with the pattern matching, in case a non-unittest test outputs similar strings.
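For reference, the tail of a unittest run that this block matches typically looks like the following (the exact "Ran N tests" wording and timing detail may differ between CPython and MicroPython's unittest), which is why line -4 is checked against the 70-dash separator, -2 against an empty line, and -1 against the OK/FAILED verdict:

```
----------------------------------------------------------------------
Ran 3 tests in 0.012s

OK
```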
Did some basic tests and apart from the threadpool issue this works the way it should.
Force-pushed from dea0c41 to 8c4a65f.
All the existing tests require a .exp file (either manually specified or generated running the test first under CPython) that is used to check the output of running the test under MicroPython. The test passes if the output matches the expected output exactly. This has worked very well for a long time now. But some of the newer hardware tests (eg UART, SPI, PWM) don't really fit this model, for the following main reasons:

- Some but not all parts of the test should be skipped on certain hardware targets. With the expected-output approach, skipping tests is either all or nothing.
- It's often useful to output diagnostics as part of the test, which should not affect the result of the test (eg the diagnostics change from run to run, like timing values, or from target to target).
- Sometimes a test will do a complex check and then print False/True if it passed or not, which obscures the actual test result.

To improve upon this, this commit adds support to `run-tests.py` for a test to use `unittest`. It detects this by looking at the end of the output after running the test, looking for the test summary printed by `unittest` (or an error message saying `unittest` was not found). If the test uses `unittest` then it should not have a .exp file, and it's not run under CPython. A `unittest` based test passes or fails based on the summary printed by `unittest`.

Note that (as long as `unittest` is installed on the target) the tests are still fully independent and you can still run them without `run-tests.py`: you just run it as usual, eg `mpremote run <test.py>`. This is very useful when creating and debugging tests.

Note also that the standard test suite testing Python semantics (eg everything in `tests/basics/`) will probably never use unittest. Only more advanced tests will, and ones that are not runnable under CPython.

Signed-off-by: Damien George <damien@micropython.org>
So that a failing unittest-based test has its entire log printed when using `run-tests.py --print-failures`. Signed-off-by: Damien George <damien@micropython.org>
This brings in:
- umqtt.simple: add optional socket timeout to connect method
- aioespnow,webrepl: use recommended network.WLAN.IF_[AP|STA] constants
- unittest: allow SkipTest to work within a subTest
- unittest: always use "raise" with an argument

Signed-off-by: Damien George <damien@micropython.org>
This allows `unittest` to work on esp8266 boards. Signed-off-by: Damien George <damien@micropython.org>
So that this port can run unittest-based tests. Signed-off-by: Damien George <damien@micropython.org>
Signed-off-by: Damien George <damien@micropython.org>
This allows it to run parts of the test on esp8266 (or any target using axTLS). Signed-off-by: Damien George <damien@micropython.org>
Signed-off-by: Damien George <damien@micropython.org>
Force-pushed from 8c4a65f to 2c80d36.
Summary
All the existing tests require a `.exp` file (either manually specified or generated by running the test first under CPython) that is used to check the output of running the test under MicroPython. The test passes if the output matches the expected output exactly. This has worked very well for a long time now. But some of the newer hardware tests (eg UART, SPI, PWM #16147) don't really fit this model, for the following main reasons:

- Some but not all parts of the test should be skipped on certain hardware targets. With the expected-output approach, skipping tests is either all or nothing.
- It's often useful to output diagnostics as part of the test, which should not affect the result of the test (eg diagnostics that change from run to run, like timing values, or from target to target).
- Sometimes a test will do a complex check and then print False/True if it passed or not, which obscures the actual test result.
To improve upon this, this PR adds support to `./run-tests.py` for a test to use `unittest`. It detects this by scanning for an `import unittest` at the top of the file (it must be the first non-comment/non-blank line). If the test uses `unittest` then it should not have a `.exp` file, and it's not run under CPython. Instead it's run under MicroPython only and the test runner looks for the unittest output at the end of the test, either "OK" or "FAILED". It uses that to tell if the test passed or failed.

Note that (as long as `unittest` is installed on the target) the tests are still fully independent and you can still run them without `run-tests.py`: you just run it as usual, eg `mpremote run <test.py>`. This is very useful when creating and debugging tests.

Note that the standard test suite testing Python semantics (eg everything in `tests/basics/`) will probably never use `unittest`. Only more advanced tests will.

Testing
One of the stm32-specific tests has been updated to use `unittest`. It was tested on a PYBD-SF2, and passes.

The esp8266 port has a change to enable function attributes so `unittest` can run on it. I did test #16147 modified to use `unittest` and ran it on an esp8266 and it worked. This shows that esp8266 has enough resources to run `unittest`.

Note: I deliberately used bad indentation in this updated test so it's easier to see the required changes to convert the test to use `unittest`. Proper formatting will be applied once this PR is finalised and approved.

Trade-offs and Alternatives
Tests using `unittest` will require first installing it (this only needs to be done once), and they also need enough resources (RAM). But usually the tests that use it will be more complex ones that already have certain resource requirements.

We could continue to use `.exp` output comparison, but that is quite limiting.

We could use `pytest`, but MicroPython doesn't support it yet.