[Windows] test_int.test_denial_of_service failed: took 15ms #114911
A few of our tests measure the time of CPU-bound operation, mainly to avoid quadratic or worse behaviour. Add a helper to ignore GC and time spent in other processes.
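A minimal sketch of such a helper (the name `cpu_stopwatch` and the result object are hypothetical illustrations, not the actual `test.support` API): it disables GC for the duration of the block and measures with `time.process_time()`, so other processes and garbage collections don't inflate the reading.

```python
import gc
import time
from contextlib import contextmanager

@contextmanager
def cpu_stopwatch():
    """Hypothetical helper: measure CPU time of a block with GC disabled.

    time.process_time() ignores time spent in other processes, and
    disabling GC avoids a collection inflating the measurement.
    """
    class _Result:
        seconds = None

    result = _Result()
    gc_was_enabled = gc.isenabled()
    gc.disable()
    start = time.process_time()
    try:
        yield result
    finally:
        result.seconds = time.process_time() - start
        if gc_was_enabled:
            gc.enable()

# Usage: read .seconds after the block finishes.
with cpu_stopwatch() as sw:
    sum(i * i for i in range(200_000))
print(sw.seconds)
```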
The problem here is that time.monotonic() has a resolution of 15.6 ms on Windows. For short intervals it can return 0 or 15.6 ms, nothing in between. I added "CLOCK_RES" to some tests:
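The advertised resolution of each clock can be inspected with `time.get_clock_info()`; this is the figure the 15.6 ms above refers to on older Windows builds:

```python
import time

# Print the advertised resolution and backing implementation of the
# clocks discussed in this thread. On Windows before 3.13,
# time.monotonic() reported roughly 0.0156 s (15.6 ms).
for name in ("monotonic", "perf_counter", "process_time"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution={info.resolution}, implementation={info.implementation}")
```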
I don't know if it's the root issue or not. I will try to reproduce and debug the issue on Windows.

Hm, this is using

Happened again: #115504 (comment)
@terryjreedy, do you see this on a machine where you could test a fix?
I observed 2 failures in about 18 tries on an installed 3.13.0 non-debug GIL build. I could hand-patch (copy-patch) a bit of test_int. On a fairly fresh local debug no-GIL build I got 0 failures in 40 tries. It's easy to patch from the PR, but I need something that fails in order to test a fix.
I changed time.monotonic() in Python 3.13 to use QueryPerformanceCounter(). It now has a resolution better than 1 microsecond.
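The effective resolution can also be probed empirically (a sketch, not part of the test suite): take the smallest nonzero difference observed between consecutive clock readings.

```python
import time

def measured_resolution(clock, samples=100):
    """Estimate a clock's effective resolution: the smallest nonzero
    difference observed between two consecutive readings."""
    smallest = float("inf")
    for _ in range(samples):
        t1 = clock()
        t2 = clock()
        while t2 == t1:          # spin until the clock actually ticks
            t2 = clock()
        smallest = min(smallest, t2 - t1)
    return smallest

# On Windows before the QueryPerformanceCounter change this was
# roughly 0.0156 for time.monotonic(); afterwards it is far smaller.
print(measured_resolution(time.monotonic))
print(measured_resolution(time.perf_counter))
```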
Like @eryksun mentioned there, the "real" resolution is 15.625 ms by default, hence the … On my PC:
Exactly this happens in very rare cases for sw_fail_huge.seconds or sw_fail_extra_huge.seconds, even though they take only 2 to 4 microseconds of wall time on my PC. Almost always this results in … If we fix the reported resolution of …
This would fix Windows and other OSes having such poor resolution for …
But then
Can I open a PR using … ? Or at least for Windows, where the resolution is just too bad to use it?
The only other user of … (lines 2474 to 2485 in 7ae9c5d), where according to the comment time.monotonic would do its job, too.
IMHO, wall time is a better fit; see https://docs.python.org/3/library/time.html#time.process_time and cpython/Lib/test/support/__init__.py, lines 2587 to 2588 in 7ae9c5d.
AFAIK the test suite doesn't have a threaded mode, so this should not matter much? It would definitely be better if it didn't include time spent in other threads, but at least it doesn't include time spent in other processes.
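For illustration: `time.process_time()` counts the CPU time of all threads in the process, while `time.thread_time()` (where available) counts only the calling thread. A small hypothetical comparison (the `burn` workload is made up for this sketch):

```python
import threading
import time

def burn(n):
    # CPU-bound busy work (hypothetical workload for this demo)
    s = 0
    for i in range(n):
        s += i * i
    return s

worker = threading.Thread(target=burn, args=(2_000_000,))
p0, th0 = time.process_time(), time.thread_time()
worker.start()
burn(500_000)            # work in the main thread too
worker.join()
p1, th1 = time.process_time(), time.thread_time()

# process_time accumulates CPU time of *all* threads; thread_time
# only that of the current thread, so its delta is smaller here.
print(f"process_time delta: {p1 - p0:.3f}s, thread_time delta: {th1 - th0:.3f}s")
```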
Go ahead. You could also rename it to Stopwatch. That way we can see the change on buildbots.
Yeah, I fully agree: in theory, CPU time would be the better fit; it's just that on Windows the resolution is too poor. In cpython/Lib/test/support/__init__.py, lines 2602 to 2604 in 7ae9c5d, we already use time.monotonic, i.e. not process_time. So let's switch everywhere to time.monotonic, or perf_counter: the docs do not say it might be unavailable anywhere, but rather call it "a clock with the highest available resolution" :)
That makes me think: should this continue using
IMHO it is better to use the same clock for all platforms. Otherwise, we might end up with an
Sounds reasonable.
Co-authored-by: Petr Viktorin <encukou@gmail.com>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Thanks for the PR!
I just saw a failure on Windows:

edit: and Linux
Yeah, unfortunately, it didn't help. I've seen some Windows bots failing too, but you've also spotted a Linux one; if I am not mistaken, those never failed before. In isolation I couldn't get it to fail anymore on Windows, but on loaded hosts process times are indeed better, like you already anticipated. #125076 already dealt with this, too, and maybe we should go with the suggestion from @gpshead
I suggest to
|
Yeah, it might be better to skip it on platforms that don't have a reliable high-resolution per-thread CPU time counter. Which is... all of them. So, give up and remove the test? I'm not aware of any other part of the codebase that's tested for accidental quadratic behaviour.
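One alternative to absolute timings is a ratio test across input sizes: time the operation at n and 2n and look at how the cost scales, which is independent of the clock's absolute resolution. A sketch (`work` is a hypothetical stand-in for the operation under test; as the thread shows, even generous bounds stay flaky on loaded buildbots, so the result is only printed here rather than asserted):

```python
import time

def work(n):
    # Hypothetical stand-in for the operation under test; bytes(n) is O(n).
    return len(bytes(n))

def best_time(n, repeat=5):
    # Best-of-N timing reduces (but does not eliminate) scheduling noise.
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        work(n)
        best = min(best, time.perf_counter() - t0)
    return best

# A linear work() should give a ratio near 2 when n doubles; accidental
# quadratic behaviour would push it toward 4.
t_small = best_time(500_000)
t_large = best_time(1_000_000)
print(f"ratio: {t_large / max(t_small, 1e-9):.2f}")
```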
Just recently, in #134874 a test was requested. There, just … Here, the implementation was optimized a lot but the initial test was left untouched AFAICT, and so we dropped below the resolution on Windows. We could increase … (line 629 in 28d91d0) or just …
We have some tests, mainly for ReDoS:

- cpython/Lib/test/test_http_cookiejar.py, line 189 in 28d91d0
- cpython/Lib/test/test_http_cookiejar.py, line 128 in 28d91d0
- cpython/Lib/test/test_poplib.py, line 356 in 28d91d0
- cpython/Lib/test/test_difflib.py, line 530 in 28d91d0
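The shape of such a test can be illustrated with a hypothetical sketch: no clock is read at all; the test passes simply by completing quickly on a large near-miss input that would take ages under catastrophic backtracking (the pattern below is a made-up, linear-time example, not one of the patterns from the tests listed above):

```python
import re

# ReDoS regression tests typically measure nothing: they run a pattern
# over a large crafted input and pass by finishing in reasonable time.
pattern = re.compile(r"a*b")
text = "a" * 1_000_000 + "c"   # near-miss input: no match, linear scan
assert pattern.match(text) is None
print("completed without hanging")
```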
And once I find a way to fix the issues in
And the interesting thing about all those tests above: they do not measure the time :)
Yes, we just check that we're not spending hours, because in general that's what it would do otherwise. Unfortunately, I don't know if this would be the case for
test_int recently failed on a buildbot. I don't know for sure, but my guess is that a GC collection got triggered at the wrong time.

I'd like to add a test helper for timing CPU-bound tasks. It should disable GC and use process_time; perhaps it can do other tricks in the future too.

Linked PRs