extmod: implement basic "uevent" module (RFC, WIP) #6125
Conversation
Force-pushed from 5aaa0e2 to 4ef1643
@andrewleech you may want to try testing this with your use of uasyncio. This is merge-ready and should work...
Phew, not easy to grok ;-)
I need to go back to the previous PR to understand why you're leaving the juicy stuff out...
I'm struggling with the ioctls being built-in here. I understand that this PR is just one step and things will change further. I'm also trying to understand the abstractions... The most confusing is: what does the object registered with a poller have to satisfy in terms of properties and interface?
In all cases, I believe it must satisfy some notion of identity such that registering the same "thing" twice only does it once (there's an implicit assumption in the self->entries[i].base.obj == args[1] check in register()).
But the more interesting part is how it must/may signal an event. I believe there are three cases:
- each object has or provides a poll() method that can be called to busy-poll the object (this is what moduevent_native implements with the poll() method being hard-coded to ioctl and its peculiar args)
- all objects are managed by a port-provided "default" poller function that is used to poll all objects at once (this is what moduevent_poll implements with the system poll() being that default function)
- each object spontaneously (e.g. via interrupt) signals events (no implementation of that yet)
I wonder whether there is value in writing generic implementations of these three cases and having port-defined macros where the specific functions need to be called instead of having each port write its own version. Maybe that ends up being too unwieldy.
I think overall what I like the least is the fact that there are still two nested scheduling loops: the uasyncio one and the poll_ms one. It would be much better IMHO to start consolidating these by allowing poll_ms to return an "empty iterator" so the caller has a chance to check everything it needs to.
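To illustrate the consolidation idea, here is a rough sketch (all names here -- next_deadline_ms, make_runnable, run_ready_tasks -- are hypothetical) of a single loop where poll_ms may yield nothing:

# Hypothetical sketch of one consolidated scheduling loop.
# poll_ms() may return an empty iterator (timeout or external wakeup),
# giving the caller a chance to do its own housekeeping every pass.
while True:
    dt = next_deadline_ms()           # hypothetical: ms until next timer expiry
    for entry in poller.poll_ms(dt):  # may yield no entries at all
        make_runnable(entry)          # hypothetical: wake tasks waiting on entry
    run_ready_tasks()                 # hypothetical: run the uasyncio ready queue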
NB: I've resolved a bunch of the questions posed in my comments in the code, I'm leaving them because it may help you decide where some comments could speed up the understanding of future reviewers.
.. module:: uevent
   :synopsis: wait for events on a set of objects
Worth some intro?
The main purpose of the uevent module is to enable fully event-driven operation using uasyncio, such that whenever any form of event needs to be awaited, it can be done by pausing the processor until the event occurs. The benefits are that the processor and other parts of the system can be put to sleep to save power, and that an event, when it comes in, gets handled immediately without having to wait for a polling loop to come around.
The uevent functionality is loosely patterned after the select and selectors modules, with the major distinction that it is not tied to file descriptors or stream I/O. Instead, an event is an abstract concept represented by a ... [oops, can't continue until I actually understand some more...]
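As a strawman, usage might look something like the following sketch (hedged: the interface is still in flux, and the eventmask values here are pure placeholders):

# Hypothetical usage sketch -- names and eventmask values are placeholders.
import uevent

poller = uevent.poll()
poller.register(uart, 1)   # placeholder mask: readable
poller.register(pin, 4)    # placeholder mask: generic event (eg edge)

for entry in poller.poll_ms(1000):  # block up to 1s waiting for events
    handle_event(entry)             # hypothetical application handler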
Yes, that sounds pretty good. Will add.
Methods
~~~~~~~
.. method:: poll.register(obj[, eventmask])
What is obj? I assume it has to have a certain interface?
It depends on the port. On unix it needs to respond to MP_STREAM_GET_FILENO. On bare-metal in the current code it must implement the MP_STREAM_POLL ioctl. But when it moves to an event-based system the obj needs to respond to MP_STREAM_SET_EVENT_CALLBACK (or whatever it'll be called).
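For reference, on bare-metal today a Python-level object can satisfy the MP_STREAM_POLL requirement by subclassing io.IOBase and implementing ioctl; a minimal sketch (the request value 3 for MP_STREAM_POLL is an assumption about the stream protocol constants):

import io

class EventSource(io.IOBase):
    def __init__(self):
        self.flags = 0      # pending poll flags, set elsewhere (eg by an IRQ)

    def ioctl(self, req, arg):
        if req == 3:        # MP_STREAM_POLL (assumed request value)
            return self.flags & arg   # report which requested events are ready
        return 0            # other requests: nothing to report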
Can some of your reply be captured in this doc, or perhaps in the code, since the interface described in the doc is still in flux?
self.data[idx] = None
self.unregister(1 << idx)

def _enqueue(self, s, idx):
what is idx? Rename to flag_idx?
// flag_idx is an index into the flag bits
??
Yeah, it's just the log2 of the event bit, so it's possible to index into the data list (I wanted to use small names here to reduce code size, because they allocate a qstr in the frozen code).
then perhaps:
// idx: log2 of event bit
(when I first read this I wasn't sure whether this was the file descriptor index or what...)
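For the record, the relationship such a comment would capture:

# idx is the log2 of the event bit used with register():
idx = 2
flag = 1 << idx      # 0b100; entry.data[idx] holds the waiter for this event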
@@ -1315,6 +1315,14 @@ typedef double mp_float_t;
#define MICROPY_PY_UCTYPES_NATIVE_C_TYPES (1)
#endif

// Whether to provide the "uevent" module, and which implementation to use
#define MICROPY_PY_UEVENT_IMPL_NONE (0)
#define MICROPY_PY_UEVENT_IMPL_NATIVE (1)
What does "Native" mean here?
That there's no underlying OS. So it could instead be MICROPY_PY_UEVENT_IMPL_NOOS or MICROPY_PY_UEVENT_IMPL_BARE.
struct _mp_obj_poll_t {
    mp_obj_base_t base;
    unsigned short alloc;
    unsigned short len;
// number of entries allocated?
// number of entries filled?
yes
    mp_obj_base_t base;
    mp_obj_poll_t *poller;
    mp_obj_t obj;
    uint16_t user_flags;
// copy of ready_flags: set of events on this object being signaled to the user
ready_flags doesn't exist on unix; it's more like "set of events ready on this object since the last time the user got them by iterating the poller".
Aha. Mind adding that as a comment?
def _enqueue(self, s, idx):
    entry = self.poll.register(s, 1 << idx)
    if entry.data is None:
        entry.data = [None, None, None]
// entry.data is used to hold the tasks waiting for reading, writing, and error respectively
Actually there are no errors in uevent.... anything that has an error is readable and writable. The 3rd entry is for a generic event (eg pin edge).
For something like a UART you need all 3: read, write and events like RXIDLE.
aha! then adding a (corrected) comment would be doubly helpful ;-)
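Perhaps something like this (a sketch folding the reply above into comments; same code as the diff, with comments added):

def _enqueue(self, s, idx):
    # idx: log2 of the event bit (0=read, 1=write, 2=generic event)
    entry = self.poll.register(s, 1 << idx)
    if entry.data is None:
        # tasks waiting on this object: [reader, writer, event]
        # (no error slot: errors make the object readable and writable;
        # the 3rd slot is for generic events, eg pin edge or UART RXIDLE)
        entry.data = [None, None, None]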
# print('(poll {})'.format(dt), len(_io_queue.map))
_io_queue.wait_io_event(dt)
# print('(poll_ms {})'.format(dt))
_io_queue.poll_ms(dt)
Following up on the above comment about moving mp_handle_pending or MICROPY_EVENT_POLL_HOOK here, doing so would provide more flexibility. I forget what the ph_key of a ready task looks like, so it may require an additional check when recalculating dt.
See reply above about that. This call should sleep as long as possible.
    mp_obj_t obj;
    uint16_t user_flags;
    uint16_t user_method_name; // a qstr
    mp_obj_t user_method_obj;
user_method_name and _obj seem unused? Maybe I missed something.
They are used by mp_uevent_poll_entry_attr.
Yes, that has getters and setters, but I don't see them being actually used anywhere. They were used in #6110. Maybe I'm overlooking something.
}
mp_obj_poll_entry_t *entry = NULL;
for (size_t i = 0; i < self->len; ++i) {
    if (self->entries[i].base.obj == args[1]) {
there's an implicit assumption here that this equality test is sufficient. For file descriptors it is. But looking ahead, it assumes that if I register Pin(12), it is a true singleton object, such that if I call Pin(12) in another part of the app and also register it, the equality check is sufficient. Ughh, I hope this makes sense. I don't think this is necessarily incorrect or bad code, but it might be useful to flag this assumption.
Yes, it's essentially an "is" comparison in Python language, which should be enough. A port should ensure that Pin(12) here and there is the same object.
But the idea is to make this much more efficient: query the object to see if it's already registered, and see if it already has its event callback set to the correct value (see #6110). This will be a more correct test because it relates to the event itself rather than the object.
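Spelling the identity assumption out as (hypothetical) Python:

from machine import Pin
# Assumed port guarantee (flagged above, not universally true today):
# constructing "the same pin" twice yields the same object, so the
# identity check in register() correctly dedupes registrations.
assert Pin(12) is Pin(12)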
// python "is" comparison on the object assumed to be sufficient for now
I wanted to take it one step at a time. It's too error prone to make large sweeping changes. If the direction here is agreed upon I can add some more commits here implementing the additional event-based items. Then this PR would basically be a cleaned up version of #6110.
As mentioned above, it depends on the port: either it's a file descriptor, or something that responds to ...
I factored out all the common code (so far) into ...
I wanted to keep ...
I think it would be really helpful to spell out the semantics of and requirements on uevent in plain English. One red flag to me is that MICROPY_EVENT_POLL_HOOK shows up in one poll_ms and mp_handle_pending shows up in the other. The macro in particular is a convenient way to sweep unresolved ugly stuff under the rug ;-). If uevent is the new way to wait for events and it's port-specific, then why does it have to call additional escape-hatch stuff? My gripe here really is that to arrive at a clean system the semantics need to be clear, and the use of MICROPY_EVENT_POLL_HOOK pulls in a lot of implicit assumptions and unknown unknowns... Maybe using the macro is the right thing to do in the end, but right now it seems muddling.

The issues I'm referring to are: what type of stuff needs to be polled or checked outside of what uevent already polls or checks? If there is such stuff, then how does it interface with uevent? Implicitly having "other polled stuff" says that there are some form of events outside of uevent: why is this necessary, and if there are, how are they supposed to "collaborate" with uevent, for example to cause uevent to return? I think that spending some time on making all this explicit and cleanly described is worth it in the long run, as it will lead to cleaner code.

An example that sends my head spinning: on the esp32 MICROPY_EVENT_POLL_HOOK checks for events; but on stm32 MICROPY_EVENT_POLL_HOOK actually suspends (thread yield or WFI). As a result poll_ms on stm32 doesn't sleep itself but uses the MICROPY_EVENT_POLL_HOOK macro, while poll_ms on unix doesn't use the macro and sleeps itself by calling the poll system call. MICROPY_EVENT_POLL_HOOK of course is used in other places in the codebase... Wouldn't it be nice to clean these things up a bit when it comes to uevent, even if that means duplicating some code for now?

Let me try to formulate some specs for uevent. More precisely, specs for the union of all uevent implementations, setting aside whether there should be a generic implementation of uevent or a separate one per port, and whether some implementations only support a subset of functionality.
There's also mp_handle_pending... Right now it seems there is some implicit contract between the mp_sched stuff and uevent's poll_ms. Kind'a "if you iterate then run mp_handle_pending" and "oh, by the way, the mp_sched stuff probably ensures that you somehow iterate when it needs you to run mp_handle_pending (for example thanks to EINTR)".

IMHO it would be very useful to discuss what mp_sched really is and what its semantics should be given all these changes. It seems to be "run this piece of code on the next available thread/task that runs the MP interpreter at a bytecode boundary". So in a threaded context should it wake up all blocked/sleeping MP threads? One of them (which)? Or should there be a separate MP thread just for mp_sched? (Such a thread could have high priority, thereby reducing latency, e.g. on esp32.) If a wake-up is needed, shouldn't mp_sched call into uevent, possibly a "wake_up_for_mp_sched()" function, and let uevent encode how that's supposed to be implemented (even if it's a no-op)? As opposed to mp_sched doing something that indirectly/implicitly happens to wake up uevent?

Ugh, there I go again producing a wall of text... I hope that what comes across is that I really value a clear statement of semantics/requirements/assumptions/etc. (it doesn't need to be formal). It helps the next person understand what goes on and enables them to make changes or additions. It also helps produce clean code and fewer bugs. One of the benefits I could see coming out of this is that it could allow most things scheduling to get concentrated inside uevent, so it can be reasoned about within that box, as opposed to having bits and pieces in a (growing) number of places in the codebase.
It's quite possible that someone might already have an ... Perhaps this should be named ...
I'll try to clear up this bit first, because I think it gets to the crux of the issue. There are currently 3 levels of execution in uPy:
- the main top-level code (including any Python threads);
- soft interrupts: scheduled callbacks that the VM/runtime runs at a bytecode boundary;
- hard interrupts: true ISRs that preempt everything and run immediately.
It's the VM/runtime's job to manage these 3 levels of execution transparently to the user, and to the other levels. Ie the top-level doesn't need to manage execution of pending soft interrupts. Hard interrupts can spawn soft interrupt callbacks via micropython.schedule().

Communication between levels (usually top level to soft/hard interrupt) can be done by using Python variables, eg setting a flag in the interrupt and testing it at the top level.

Soft/hard interrupts are, pretty much exclusively, triggered by an external event, eg UART incoming character, pin edge change, timer expiring, BLE event. On unix it would be a signal, eg SIGPIPE. These events need to be processed independently of whatever the main top-level code is doing.

Coming to uevent... so in essence there are actually 2 layers of "event" sub-systems:
- a "transparent layer": the soft/hard interrupt machinery, managed by the VM/runtime;
- a "user layer": tasks waiting on events, eg uasyncio scheduling driven by uevent.
I don't think it makes sense to try and merge these 2 layers, ie for the "user layer" to be responsible for executing soft callbacks from the "transparent layer". I also don't think it makes sense for the "transparent layer" to implicitly wake the "user layer" (explicitly, yes, eg via ...).
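A tiny sketch of that explicit style of cross-layer communication (the poller.notify hook is hypothetical):

import micropython

event_pending = False        # plain Python variable shared between layers

def _soft_cb(arg):           # soft interrupt: runs at a bytecode boundary
    global event_pending
    event_pending = True     # explicit signal to the user layer
    # an explicit wake of uevent would also go here, eg poller.notify(key)
    # (hypothetical API)

def hard_irq(pin):           # hard interrupt handler
    micropython.schedule(_soft_cb, None)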
Please could you clarify how these statements align. From the docs, "Using micropython.schedule": ...

A MicroPython instruction can map onto many bytecodes: how does the VM ensure that the consequence of preempting at the bytecode level is invisible at the Python level? Why does this not (#6106) automatically imply that Event.set() can safely be called from a soft ISR?
Damien, thanks for the long reply! I've also wondered about the question Peter has asked, I believe soft interrupts are not checked strictly after every bytecode but only after ones that occur at the end of a statement, or something like that. But I could be very wrong!
I didn't mean to suggest this. I'll rephrase what I'm trying to do by saying that I'm trying to make the communication between the transparent layer and the user layer explicit, and to make it clear how blocking and unblocking occur and whose responsibility is what. Besides simply clarifying overall what is happening, I believe this is necessary to go to sleep, because sleep is a global decision that affects everything at all layers, and so global knowledge is necessary in some shape or form (very abstractly speaking).

I believe something important is missing from the scheduling description: threads (and tasks). Are the following statements correct in the presence of Python threads?
About causing a Python thread to run when a soft-IRQ is scheduled, are the following statements correct? Note that to a large degree the issues below are not relevant today because of busy polling; these bullet points are not about possible issues in the code today but about what needs to be solved going forward.
Let me try a couple more statements related to sleep (processor stops running instructions):
Another wall of text. My head is spinning :-). My overall suggestion is to make all of the above explicit in the code and to think about which layer is responsible for implementing what, and have a clearly named function (or macro) for the purpose. Some of these functions/macros may be empty (or #-defined away for efficiency).

NB: I realize a lot of the above applies more to #6110 than this PR and is definitely forward-looking. I'm just continuing here 'cause we started here, and I hope that at the end the information can be captured and put into the MP implementation notes or where appropriate.
As I understand it, bytecode granularity ≠ Python statement granularity. So Damien, you say:
There's perhaps a slightly different way to look at this. The CPU is an opcode-devouring loop, and the VM is a bytecode-devouring loop. For any kind of context change, these loops need to be suspended in some way. W.r.t. hardware interrupts, neither the VM nor any other part of the system can really do anything; they happen anyway. All a h/w IRQ can do is signal the software in some atomically safe fashion. At the gate level, every h/w IRQ is effectively picked up by the fetch/execute cycle in the CPU. Likewise, a soft IRQ happens (usually triggered by a h/w IRQ - or even always?) and then the "soft CPU", i.e. the VM, picks these soft IRQs up at its fetch/execute cycle, i.e. the VM loop. Nothing new, but how you deal with this has major implications IMO for stacking of contexts (both C and Python) and for the information available to "schedule" low-power mode changes.

If low-power scheduling is to become a major feature in MicroPython, then I think it makes sense to treat that goal as the central decision point for everything else. A power/speed/interrupt manager of some yet-to-be-defined kind could "see" all the change-of-control path activity on a µC, from hard/soft interrupts, to timing queues, to the VM loop doing its thing. And not only see this, but plan and predict what's likely to happen and to be needed next. If I know there are no timeouts for the next 100 ms, and I've seen no UART interrupts, and no actual data arriving over USB, then I can decide to switch to the RTC clock, put the µC in a deep sleep mode which is still able to wake up for the stuff that matters, and reduce the power consumption dramatically further than in plain WFI/WFE (on ARM) sleep mode. MP is not there at the moment. But the ...

With threads, I don't think this situation becomes drastically different. Unless all the sleeping and interrupt handling is done by the underlying RTOS anyway. Then all of MP becomes a slave task, with occasional soft interrupts fed into it.

Deep sleep, at the very lowest level, comes back as a full reset. That's basically the same as turning the chip off and powering it up again. It's also the mode with the highest possible power savings (well sub-µA). I'd love to see that work well with MicroPython. And here too, there is a benefit to adding a small amount of logic around the VM: the VM is done for whatever reason, it returns, leaving the complete C and Python context in a state which can be resumed. At which point this power/speed/interrupt manager thing decides it wants to go all out, saves state in a power-cycle resilient way (eeprom, backup sram, whatever), and turns the chip off, with care to get the wake-up logic right, so that it will get back control on power-up. It's a bit like suspend-to-disk, but with very different options in an embedded µC environment.

Anyway. If ...
Good questions... both of the statements are true. By "update of Python objects" it means things like list/dict insertion/deletion are atomic. But certainly ... For ...
Yes technically they are only checked after a jump/jump-if instruction, but you should code as though they could happen between any opcode, because maybe one day the VM will be changed so they are checked more often (to reduce soft-interrupt latency).
The behaviour in the code is a bit undefined/ambiguous at the moment. First, one should really make the distinction between GIL and no GIL (uPy can work without a GIL because it doesn't have reference counting). With a GIL it pretty much reduces to the single-threaded case, because threads can only execute sequentially, and the soft-irq must execute with the GIL. So, yes, a soft-irq is executed in the current thread's context, at the next (jump) opcode boundary.

Without the GIL there are other locks (qstr and gc) which must be obtained by any running thread or soft-irq. Actually, in a threaded system without the GIL there are no guarantees on atomic Python operations, because threads can be truly running in parallel. So soft-irqs also have no guarantees on atomic operations like dict/list updates. In this case you need to explicitly manage any communication between threads (and soft-irq <-> thread) using locks, eg from _thread.

Well, if we wanted very low-latency soft-callbacks, it could be that this no-GIL behaviour be extended to non-threaded systems. That is, soft-callbacks have no guarantees on atomic operations, or execution at opcode boundaries. Instead the only guarantees are: qstr and GC operations are atomic, because they are global operations that update global state. And soft-irq callbacks can still allocate heap memory (because GC is atomic). Everything else, all other data structures, must be managed through explicit locks by the user. Changing the behaviour to this would be a discussion for elsewhere (and I'm not suggesting to do it at this point...).
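For example, the explicit-lock discipline in the no-GIL case might look like this sketch (standard _thread API; the data shared is illustrative):

import _thread

lock = _thread.allocate_lock()   # guards `shared` between threads/soft-irqs
shared = []

def producer():
    lock.acquire()
    try:
        shared.append(1)         # without the lock there is no atomicity guarantee
    finally:
        lock.release()

_thread.start_new_thread(producer, ())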
Correct.
Correct.
Correct. The soft-irq sub-system/machinery acts transparently to uevent, ie uevent does not know about it. And, yes, if a soft-irq callback triggers an event that uevent is waiting on, then that triggering must be explicit (and could be an anonymous trigger or a named one; in the latter case uevent would know what tasks to schedule as runnable).
I'm not sure I fully understand this point/statement. But in the original idea with ... This might not be the best way to do it. As you say, it's probably good to disallow making a task runnable without uevent. That is, all notifications to uevent must be "named" so that uevent knows what tasks to make runnable when it wakes up. Eg when you register an object with uevent, it passes that object a "key" which it can use to notify the poller/uevent that the event triggered, and then all tasks associated with that "key" should be woken/made runnable.
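A sketch of that "named trigger" registration idea (everything here is hypothetical, Python standing in for the eventual C design):

class Poller:
    def __init__(self):
        self._waiting = {}   # key -> list of tasks blocked on that event
        self._ready = []     # keys signalled since the last wakeup

    def register(self, obj, tasks):
        key = len(self._waiting)        # opaque key handed to the object
        self._waiting[key] = tasks
        # hypothetical hook: the object calls this when its event triggers
        obj.on_event = lambda: self._ready.append(key)
        return key

    def wake_ready(self, make_runnable):
        # called when the poller wakes: only tasks named by a key are woken
        while self._ready:
            for t in self._waiting[self._ready.pop()]:
                make_runnable(t)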
Yes. But they can use the same approach to signal uevent because it will be hard-irq safe.
It's not always in the VM (the part that executes bytecode opcodes)... it could be in a blocking call like ...

The VM itself knows nothing about WFI, sleeping or blocking. Only modules like ...
Aren't signals always executed on user threads, not kernel threads?? But, yes, any blocking kernel call made by the runtime will be interrupted by EINTR (due to a signal) and then the runtime has a chance to process the soft-irq that was scheduled by the signal handler.
As long as the soft-irq callback is executed by some thread it doesn't matter which one (because the top-level Python code shouldn't know about execution of soft-irq callbacks). There's only a problem if no thread executes the callback (which may be a problem in the current unix port...).
Yes, there could be issues related to this. But it depends on whether the GIL is used or not. If it is used, then if the hard-irq executes when the GIL is released, the hard-irq can just acquire the GIL and run the soft-irq callback there and then. If the GIL cannot be acquired then there must be a VM task that is currently running, which can take care of running the scheduled soft-irq callback.
Yes, multicore needs more thought in this regard.
Correct. Although there may be other IRQs which are registered outside MicroPython, like low-level systick, USB or UART rx buffer handlers. It cannot go to sleep if these are active and cannot wake the system from sleep (they can all wake it from WFI but not all from lightsleep mode). The system must keep track of which IRQs are active and what level of sleep can be achieved at any given point.
Correct. But on stm32 if nothing is runnable then it does do a WFI, see ...
Correct.
Yes that sounds about right... but this is actually implemented on esp32 using ...
Correct.
Thanks for all the clarifications! I had not drilled down to ...
Kernel threads are entities that are scheduled by the kernel (they can be thought of as lightweight processes that all share the same address space). User threads are entities that run in the context of a kernel thread and that are scheduled by a user-level thread library (user threads are multiplexed onto a kernel thread). Pthreads is an interface that can be implemented by using a kernel thread for each pthread, or by using a single kernel thread and implementing all pthreads at user level, or by a combination of both (N pthreads using N user threads multiplexed onto M kernel threads).

The reason this is relevant here is that we're talking about unblocking kernel operations. The tricky situation is the following (I'm not 100% sure I got all the details right):
What this highlights is that one either has to really understand the precise semantics of the poll system call and EINTR in the context of pthreads (since that's what MP uses), or one has to always assume the worst case and use a solution like the socketpairs to force thread A in the example above to unblock.
This makes it hard to reason about the behaviour of scheduled functions. For example a forum user asked if it is safe to modify a ...

There is clearly a tradeoff between latency and usability. Soft IRQs clearly need to be as fast as possible, and it's reasonable to expect users to take precautions. In the case of micropython.schedule I would suggest that pre-emption at a Python instruction boundary would improve usability. An explicit call to micropython.schedule implies an acceptance of latency. In many cases (e.g. uasyncio) the latency is unlikely to be significant.

This could be taken further if we had micropython.critical(set) which allowed the declaration of critical sections. These would preclude pre-emption by scheduled functions (but would be ignored by ISRs). This would fix the Event.set problem.
Yes, this is a longer-term goal.
Yes, this describes well what I'm aiming for. It'll depend heavily though on the MCU/SoC and its sleep support, and also whether there's an RTOS in there.
That would be a fancy feature but requires a lot of management of the SoC, eg saving the state of all peripherals.
Yes, I think using socketpair is the way to go in a multi-threaded POSIX-like system. For example, you may have 2 independent ...
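A minimal sketch of the socketpair wakeup pattern (POSIX-style Python; runnable as-is under CPython):

import select
import socket

r, w = socket.socketpair()      # w is written by whoever needs to wake the poll
poller = select.poll()
poller.register(r, select.POLLIN)

def wake():                     # callable from another thread (or handler)
    w.send(b"\x00")             # any byte forces the blocked poll() to return

# in the blocked thread:
# for fd, ev in poller.poll(timeout_ms):
#     if fd == r.fileno():
#         r.recv(64)            # drain the wakeup bytes, then rescan state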
Code in scheduled functions can never be interrupted by another soft-scheduled callback. So you can do ... So it's definitely safe to modify a ...

I'm not sure what the best thing is to do here regarding docs and specs... maybe we can define a set of operations/data structures that are always guaranteed to be atomic, and list them in the docs?
That's actually kind of how it's implemented at the moment: scheduled functions are only checked for execution at a jump opcode, which means most expressions are atomic. But I'm not sure it's a good idea to rely on this, or make it part of the spec/standard, because that locks us in to this implementation and forbids potential optimisations in the future.
In PRs #6106 and #6110 I added ... which could be used via a context manager:

with micropython.scheduler_locked():
    ...
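For reference, such a context manager could be as small as this sketch (schedule_lock/schedule_unlock are hypothetical primitives along the lines of those PRs, not existing micropython-module functions):

import micropython

class scheduler_locked:
    # defer soft-scheduled callbacks for the duration of the with-block
    def __enter__(self):
        micropython.schedule_lock()     # hypothetical primitive
    def __exit__(self, exc_type, exc, tb):
        micropython.schedule_unlock()   # hypothetical primitive
        return False                    # don't suppress exceptions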
There is no such thing as a python instruction. You can talk about statements, or simple (non-compound) statements, or expressions, or expression atoms, etc. Pretty much all of these can have function calls and the moment you call another python function all bets are off. So I don't think you can meaningfully define something at the python level to be atomic.
I believe this is the only sane solution to the issue, and yes, a context manager would be nice!
@dpgeorge @tve Points taken, thank you. I agree about the context manager. Should nested calls be considered (or disallowed)?
This seems to be the only option, along with documenting the context manager and the reasons for using it.
Yes, nesting of scheduler lock/unlock should be allowed, and it currently is because the underlying ...
This is an automated heads-up that we've just merged a Pull Request that removes the STATIC macro from MicroPython's C code. See #13763. A search suggests this PR might apply the STATIC macro to some C code. If it does, the code will need updating to use plain static instead. Although this is an automated message, feel free to @-reply to me directly if you have any questions about it.
Background: the aim is to make events in MicroPython more generic and efficient, rather than having busy polling (based on POSIX poll) and being restricted to stream IO. The main use is for the uasyncio module, eg so that external interrupts like pin edges can wake the asyncio scheduler, but the event sub-system should not be tied to uasyncio.

Already there is #6056, #6106 and #6110. This PR takes the proof-of-concept PR #6110 and extracts out just the "uevent" module and implements it for the stm32 and unix ports, to start with. At the moment this module is quite similar to the existing "uselect" module, at least on the surface. But the idea is to make "uevent" as efficient as possible on bare-metal (ie O(1) for all operations) and support arbitrary events, and the existing "uselect" API is too restrictive to achieve this goal.
What's done in this PR:
There's no real functional change here, the aim is to switch to uevent in a seamless way and then gradually improve it.
For now the uevent module should be considered "private" because its interface may change.
Any ideas/comments on improvements or alternatives are welcome.