esp32/mpconfigport: MICROPY_EVENT_POLL_HOOK fails to yield #5994
Comments
One option is to modify …

Another option for esp32 is to not use …

Changing the tick rate to 1kHz is the simplest option, but it's still going to suffer from the same problems, just at a 1ms resolution instead of 10ms. Eg adding a yield in …
Thanks for the food for thought. For the short term, would you entertain a PR that sets the tick rate to 1kHz and makes sure that more than one packet per ms can be sent/received? If so, I'd look into that. For the long term, you suggest using the IDF poll and perhaps hooking in custom events. I have to admit I was hoping we could go the other way, which is to avoid the poll stuff (unless explicitly used by the app) and move to a more event-driven system where the MP task can go to sleep, just like a WFI, and be awoken by interrupt handlers or other events. I haven't looked into the new event stuff in the IDF, but for sockets there's always the possibility of having a separate task doing the poll calls and converting the outcome into events. (I mean "events" in the very general sense, as something that sets some flag and ensures the MP task is running.)
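As a concrete illustration of that last idea, here is a hypothetical sketch; nothing like this exists in the tree, and `poller_task`, `watched_fd`, `socket_ready` and `mp_task` are made-up names. A separate FreeRTOS task blocks in `select()` on behalf of MicroPython and, when the descriptor becomes readable, sets a flag and wakes the MicroPython task with a task notification.

```c
// Hypothetical sketch of the "separate poller task" idea: block in select()
// outside the MicroPython task, then convert the result into an "event"
// (a flag plus a task notification that wakes the MP task).
#include <stdbool.h>
#include <sys/select.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

static volatile bool socket_ready;   // flag the MP loop would check
static TaskHandle_t mp_task;         // handle of the MicroPython task
static int watched_fd;               // socket descriptor to watch

static void poller_task(void *arg) {
    (void)arg;
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(watched_fd, &rfds);
        // Sleep here instead of busy-waiting in the MicroPython task.
        if (select(watched_fd + 1, &rfds, NULL, NULL, NULL) > 0) {
            socket_ready = true;      // "event" in the very general sense
            xTaskNotifyGive(mp_task); // make sure the MP task is running
        }
    }
}
```

The MicroPython side could then block on a task notification in its event hook instead of polling, which is essentially the WFI-like behaviour described above.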
Yes. I'm not certain about the lag in packets, but it'd need to be tested (see eg `uiperf3.py`).

Yes, this is also a good idea. Basically implementing a custom …
On the STM32, `MICROPY_EVENT_POLL_HOOK` contains either a `pyb_thread_yield()` or a `__WFI()`, causing the flow of execution to pause for a tad and thereby give other threads a chance at the cpu, or provide an opportunity to save power. On the esp32, `MICROPY_EVENT_POLL_HOOK` does not contain any form of yield (https://github.com/micropython/micropython/blob/master/ports/esp32/mpconfigport.h#L245-L252). Instead, `mp_hal_delay_ms` both calls `MICROPY_EVENT_POLL_HOOK` and yields (https://github.com/micropython/micropython/blob/master/ports/esp32/mphalport.c#L164-L165), and `mp_hal_stdin_rx_chr` also calls both. However, in moduselect, `poll_poll_internal` just has the `MICROPY_EVENT_POLL_HOOK` macro (https://github.com/micropython/micropython/blob/master/extmod/moduselect.c#L256), with the result that on the esp32 it busy-waits, hogging the cpu, while on stm32 it yields as expected.

It's pretty easy to fix the esp32 situation, and that's actually something included in #5473 (https://github.com/micropython/micropython/pull/5473/files#diff-b9499fc8ad5b9793822626f6befdb1b6 and https://github.com/micropython/micropython/pull/5473/files#diff-4c3d68ff23336da03f336fbc26571f7b). But that's not the end of the story...
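For reference, a minimal sketch of the kind of change meant here, assuming the yield is implemented with a FreeRTOS task-notification wait; the actual diff in #5473 may differ, and the surrounding macro names are the ones used in the esp32 port at the time.

```c
// Sketch of an esp32 MICROPY_EVENT_POLL_HOOK that yields: it keeps the
// pending-event and socket-event handling, then blocks for at most one RTOS
// tick instead of returning immediately. Another task (e.g. a Wi-Fi callback)
// can cut the wait short with xTaskNotifyGive().
#define MICROPY_EVENT_POLL_HOOK \
    do { \
        extern void mp_handle_pending(bool); \
        mp_handle_pending(true); \
        MICROPY_PY_USOCKET_EVENTS_HANDLER \
        MP_THREAD_GIL_EXIT(); \
        ulTaskNotifyTake(pdFALSE, 1); /* yield for up to 1 tick (10ms at 100Hz) */ \
        MP_THREAD_GIL_ENTER(); \
    } while (0)
```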
Due to the esp32's 100Hz clock tick, moving the yield into `MICROPY_EVENT_POLL_HOOK` results in a 10ms granularity whenever a routine uses `MICROPY_EVENT_POLL_HOOK`. `mp_hal_delay_ms` works around this by including code that checks "is the remaining sleep duration shorter than a 10ms clock tick? if yes, then busy-wait". However, `poll_poll_internal` doesn't (and really shouldn't, since it's not port-specific). The next thing that happens is that the test suite's `extmod/uasyncio_basic.py` verifies that sleeping in uasyncio for 20ms and 40ms has the desired effect, and it turns out it doesn't on the esp32: the sleep times are often 30ms and 50ms. This over-sleeping happens because uasyncio's `IOQueue` uses `ipoll` to sleep, which now gets 10ms-long yields inserted and, as mentioned above, doesn't compensate for that. So long story short, fixing the not-yielding issue causes over-sleeping when using uasyncio.

IMHO the best fix is to increase the clock tick rate on the esp32 to 1kHz, which would also help reduce latency when waiting to receive a packet, but I'm happy to learn about other alternatives!
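For readers unfamiliar with the compensation trick referred to above, below is a sketch of the pattern, an illustration of the technique rather than the actual `mp_hal_delay_ms` code: sleep in whole RTOS ticks while more than one tick of delay remains, then busy-wait the final fraction so a 20ms sleep doesn't become 30ms.

```c
// Sketch of sub-tick sleep compensation: coarse vTaskDelay() while at least a
// full tick remains, busy-wait on the timer for the remainder. With a 100Hz
// tick (portTICK_PERIOD_MS == 10) this keeps short sleeps accurate at the
// cost of burning CPU for up to one tick.
#include <stdint.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_timer.h"

static void delay_ms_accurate(uint32_t ms) {
    const uint64_t target_us = (uint64_t)ms * 1000;
    const uint64_t t0 = esp_timer_get_time(); // microsecond timestamp
    for (;;) {
        uint64_t elapsed = esp_timer_get_time() - t0;
        if (elapsed >= target_us) {
            break;
        }
        if (target_us - elapsed > (uint64_t)portTICK_PERIOD_MS * 1000) {
            vTaskDelay(1); // more than one tick left: give up the CPU
        }
        // else: less than one tick left, keep looping (busy-wait)
    }
}
```

Raising the FreeRTOS tick rate (`CONFIG_FREERTOS_HZ=1000` in sdkconfig) would shrink the window in which such a busy-wait runs from 10ms to 1ms, which is the trade-off being proposed above.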