
py/objgenerator: do not allow pend_throw to an unstarted generator. #5288


Closed
wants to merge 1 commit

Conversation

dxxb
Contributor

@dxxb dxxb commented Nov 4, 2019

Calling pend_throw() on an unstarted generator prevents the generator's code
from catching the injected exception, breaking existing code that relies on
being able to catch all exceptions. See #5242 (comment)

pend_throw() to unstarted generators prevents catching the injected
exception by the generator code.
@kevinkk525
Contributor

kevinkk525 commented Nov 4, 2019

Please provide a code example of how you were able to catch all exceptions before the changes in #5242, because I couldn't think of any.

But judging from your comments, your changes are incompatible with CPython behaviour and therefore shouldn't be implemented. It would make writing code more confusing.

@dxxb
Contributor Author

dxxb commented Nov 4, 2019

@dpgeorge @jimmo, I looked into restoring the original behaviour with respect to exceptions (i.e. always raise exceptions at yield statements) while letting pend_throw() accept exceptions for non-started generators. All simple solutions (limited to changing mp_obj_gen_resume()) involve looping over mp_execute_bytecode() twice in the not-started generator case: once to move the IP to the yield, and again to raise the pending exception. Would such a solution be acceptable? Does anyone have a better idea?

Right now I favour leaving mp_obj_gen_resume() as is, forbidding pend_throw() on not-started generators (as per this PR and the pre-#5275 behaviour), and handling the case of not-started generators in uasyncio's cancel() by scheduling the task so it advances to the first yield statement, and then injecting the exception.
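The proposed cancel() handling can be sketched in pure Python. Note this is only an illustrative sketch: CPython has no pend_throw(), so the pending exception is emulated here with a hypothetical Task wrapper (the names Task, run and cancel are mine, not the uasyncio API), and Exception stands in for CancelledError.

```python
# Sketch of the proposed cancel() handling (hypothetical names, not uasyncio).
# A pending exception is emulated by recording it and injecting it via
# throw() the next time the event loop resumes the task.

class Task:
    def __init__(self, coro):
        self.coro = coro
        self.started = False
        self.pending_exc = None

    def run(self):
        # Resume the task; raise any pending exception at the yield point.
        if self.pending_exc is not None and self.started:
            exc, self.pending_exc = self.pending_exc, None
            return self.coro.throw(exc)
        self.started = True
        return next(self.coro)

    def cancel(self, exc):
        if not self.started:
            # Proposed handling: advance to the first yield first, so the
            # generator's try/except can see the exception on the next cycle.
            self.run()
        self.pending_exc = exc

def worker(log):
    try:
        while True:
            yield
    except Exception:
        log.append("cancelled")  # the task always gets to observe cancellation

log = []
t = Task(worker(log))
t.cancel(Exception("cancel"))  # never ran: advanced to the first yield first
try:
    t.run()                    # exception raised at the yield, caught inside
except StopIteration:
    pass
print(log)  # ['cancelled']
```

With this scheme a task cancelled before it ever ran still handles the exception in its own except/finally clauses, which is the behaviour the PR argues for.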

@dpgeorge dpgeorge added the py-core Relates to py/ directory in source label Nov 5, 2019
@dpgeorge
Member

dpgeorge commented Nov 5, 2019

So, #5275 did change behaviour of pend_throw() in a backwards incompatible way: previously it was kind of possible to detect if a generator had not started by the fact that pend_throw() would raise an exception on it. Now it's not possible to detect such a state of a generator.

But as I see it, pend_throw() is an internal implementation detail and should not be used or relied upon by end users. It's not in CPython, and not documented anywhere, so its specification is what the code does, and is subject to change. Similarly with, e.g., utimeq and lots of other internal details that are "accidentally" exposed to the user. It's too difficult to require that all features/functions/methods/etc. that may be accessed by the user (even if they are not intended to be accessed by the user) are set in stone once they appear.

If anything, what could be added is a method which lets one test if a generator is already started or not. That would be more powerful than before #5275

@dxxb
Contributor Author

dxxb commented Nov 5, 2019

So, #5275 did change behaviour of pend_throw() in a backwards incompatible way

Thank you for the reply @dpgeorge. Given we do not have a better solution for this issue yet, can 5578182 please be reverted (or this PR applied, if you think it is OK) to restore functionality to existing code first? Then (hopefully soon) a better solution can be found and applied.

About the issue which resulted in the creation of #5275:

  1. The mismatch in cancel()'s behaviour reported by @kevinkk525 between CPython asyncio and uasyncio is actually a problem in CPython (you can verify that CPython doesn't implement its own documented behaviour).
  2. The current CPython implementation behaviour as reported by @kevinkk525 can be reproduced without #5275 by using a combination of throw() and pend_throw() in pure Python.
  3. See below for my reasoning as to why the behaviour of the current CPython implementation is not desirable (i.e. inferior to what we had before #5275).

Now it's not possible to detect such a state of a generator.

I see that as a lesser issue with #5275. I think the loss of precise and unconditional exception handling within generators is a lot worse: besides breaking code that relies on being able to catch all exceptions injected by pend_throw(), it also prevents writing simpler, more compact code with generators. Code using pend_throw() now needs to know how the specific generator being affected is supposed to react to exceptions, whereas before that knowledge was encoded in the generator itself.

# Different generators do different things in response to different exceptions and their internal state
def gen1():
  try:
    while True:
      baz = yield
  except CanceledError:
    do_gen1_A()
  except SomeOtherError1:
    do_gen1_B()
  finally:
    do_gen1_C()

def gen2():
  try:
    while True:
      baz = yield
  except CanceledError:
    do_gen2_A()
  except SomeOtherError1:
    do_gen2_B()
  finally:
    do_gen2_C()

# Before we could write something generic to inject exceptions

def signal_gen(exc):
  gs = [gen1(), gen2()]
  for g in gs:
    try:
      g.pend_throw(exc)
    except TypeError:
      next(g)
      g.pend_throw(exc)
    try:
      next(g)
    except Exception as e:
      pass  # check and do something based on e
    else:
      pass  # exception swallowed by the generator, do something else

# Now a function coupled with each generator has to be written.
def signal_gen1(g, exc):
    try:
      g.pend_throw(exc)
      next(g)
    except CanceledError:
      do_gen1_A()
    except SomeOtherError1:
      do_gen1_B()
    finally:
      do_gen1_C()

def signal_gen2(g, exc):
    try:
      g.pend_throw(exc)
      next(g)
    except CanceledError:
      do_gen2_A()
    except SomeOtherError1:
      do_gen2_B()
    finally:
      do_gen2_C()
# Additionally, when the signaling functions above get an exception
# they have no way to know if the generator already handled the
# exception or not.
# Adding a function to retrieve the state of the generator would allow
# the signaling function to decide if the generator had a chance to process
# the exception or not.

Before, with just pend_throw(), it was possible (not "kind of" possible) to know if a generator was running, and we also had the guarantee that all exceptions injected with pend_throw() would be handled by the generator. FYI, the latter behaviour is required to implement CPython's description of cancel() in the official asyncio documentation and, in general, it is a requirement for being able to write well-behaved generators when exceptions are involved.

Now, instead, pend_throw() sometimes behaves like throw() and sometimes does not, so we need an extra test of the generator state to get close to what we had before; but even so, the new semantics do not make the same guarantees as the previous single pend_throw() API.

I do not see how the current behaviour can be more powerful than the previous one when it needs two functions to get close to doing the job of the previous one, and is still not capable of expressing what was covered by the previous single pend_throw().

But as I see it pend_throw() is an internal implementation detail and should not be used or relied upon by end users.

I cannot say I agree for the following reasons:

  1. The pedantic and less important one: pend_throw() is a public Python method, i.e. it has no leading underscore.
  2. More importantly: the new behaviour of pend_throw() (even with the proposed generator-state test function) forces people to write more (and worse) code to achieve something close to, but not quite the same as, what we had before. As I mentioned above, the guarantee of precise and unconditional in-generator exception handling previously made by pend_throw() is a requirement for writing correct (i.e. not buggy) async tasks (among other things).

If anything, what could be added is a method which lets one test if a generator is already started or not. That would be more powerful than before #5275

Please see my comment above about why I think current semantics + started gen test is inferior to pre #5275 semantics.

@kevinkk525
Contributor

How would you write that code in CPython?

@dxxb
Contributor Author

dxxb commented Nov 5, 2019

How would you write that code in CPython?

CPython has only .throw(), and .throw() is roughly equivalent to .pend_throw() + next(), i.e. it runs the coroutine with the injected exception immediately, including raising the exception at the start of the generator if it has not started yet.

So something like this would be roughly equivalent to the code above with the post-#5275 behaviour:

def signal_gen1(g, exc):
    try:
      g.throw(exc)
    except CanceledError:
      do_gen1_A()
    except SomeOtherError1:
      do_gen1_B()
    finally:
      do_gen1_C()

def signal_gen2(g, exc):
    try:
      g.throw(exc)
    except CanceledError:
      do_gen2_A()
    except SomeOtherError1:
      do_gen2_B()
    finally:
      do_gen2_C()

Note that .throw() does not work with uasyncio, because the latter intentionally uses plain generators as coroutines and a different event-loop design. One of the reasons for .pend_throw() to exist is that it separates setting a pending exception, which can happen in a coro or in code outside the event loop, from running the coro in the event loop.

This micropython/micropython-lib#215 (comment) makes for interesting reading. It may also be interesting to @dpgeorge. Warning: stream of consciousness within. The content of that comment (or the whole thread) doesn't provide a full picture of, or reference to, what uasyncio looks like today and why.

@kevinkk525
Contributor

Interesting, thank you for providing the code examples. I now understand your case a lot better.

You were going on about uasyncio so much, when you are actually using bare generators and not the uasyncio API. (Maybe you have it mixed into your code, of course, but the issue you have seems to be solely with using bare generators, because even "try: asyncio.cancel(coro) except TypeError: next(coro) ..." is a low-level workaround and not uasyncio API; therefore the change to pend_throw didn't break uasyncio behaviour at all.)

FYI the latter behaviour is required to implement CPython's description of cancel() in the official asyncio documentation and, in general, it is a requirement for being able to write well behaved generators when exceptions are involved.

I still don't see why MicroPython should follow CPython's description instead of the actual behaviour, which nobody has raised as an issue. People expect MicroPython to behave the same.
Of course, it doesn't matter whether it does so by using the throw() workaround in asyncio.cancel() or by changing how pend_throw() works.

@dxxb
Contributor Author

dxxb commented Nov 5, 2019

You were going on about uasyncio so much when you are actually using bare generators and not the uasyncio API.

Negative: uasyncio uses bare generators as coroutines and is designed so that only tasks that have started (and are suspended or done) can be cancelled. I am merely showing that the recent change to .pend_throw() breaks one of the underlying assumptions/mechanisms uasyncio is built upon.

I realise there is really no documentation about uasyncio, but if you wrote your tasks assuming they can be cancelled before reaching yield, then they need to be changed, or you need to kill them synchronously using .throw() in your own implementation of cancel() (and risk breaking uasyncio's event loop). There is really no way around it, because uasyncio's event loop simply does not work the way you want it to. The link I posted in my last comment should provide at least some context, if you read it.

because even "try: asyncio.cancel(coro) except TypeError: next(coro) ... " is actually a low level workaround and not uasyncio API, therefore the change to pend_throw didn't break uasyncio behaviour at all

Uh? Does not compute 🤷‍♀ The workaround you mention is not a workaround at all, and the consequent in your statement (the part after the "therefore") doesn't follow from the antecedent (the part before the "therefore").

The bottom line of my dive into uasyncio is that what you wanted to do in #5242, that is, cancel() a not-started generator without waiting for execution to reach a suspension point, is just not compatible with uasyncio's design. Both the pure-Python workaround I provided and #5275 violate uasyncio's assumptions that execution in generators must reach the first suspension point before they can be cancelled, and that the cancellation exception must trigger when the event loop runs the coroutine (that's why .pend_throw() is used).

@dpgeorge
Member

dpgeorge commented Nov 6, 2019

I do not see how the current behaviour can be more powerful than the previous one when it needs two functions to get close to doing the job of the previous one and is still not capable to express what was covered by the previous single pend_throw().

Let's say there's a new generator method called is_running(). Then the old behaviour of pend_throw() is:

def old_pend_throw(g, v):
    if g.is_running():
        return g.pend_throw(v)
    else:
        raise TypeError("can't pend throw to just-started generator")

I don't see what other capabilities this is missing?

@dxxb
Contributor Author

dxxb commented Nov 6, 2019

I don't see what other capabilities this is missing?

Does .is_running() return True when the generator has finished running? I believe the old .pend_throw() checks whether the generator has started, not whether it is between its first suspension and its last, so if g.is_running(): g.pend_throw(v) may not be equivalent to the old .pend_throw().
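For comparison, CPython already distinguishes these states via inspect.getgeneratorstate() (which MicroPython does not provide): "not started" (GEN_CREATED) and "finished" (GEN_CLOSED) are distinct from "suspended at a yield" (GEN_SUSPENDED), so a single boolean is_running() flag would conflate them. A quick CPython check:

```python
# CPython-only sketch: inspect.getgeneratorstate() distinguishes the
# generator states under discussion (MicroPython has no equivalent).
import inspect

def gen():
    yield

g = gen()
states = [inspect.getgeneratorstate(g)]      # GEN_CREATED: not started
next(g)
states.append(inspect.getgeneratorstate(g))  # GEN_SUSPENDED: at a yield
try:
    next(g)
except StopIteration:
    pass
states.append(inspect.getgeneratorstate(g))  # GEN_CLOSED: finished
print(states)  # ['GEN_CREATED', 'GEN_SUSPENDED', 'GEN_CLOSED']
```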

Issues with .is_running() notwithstanding, I see where you are going with this, but let's compare the outcomes of the two solutions:

Old .pend_throw() behaviour (revert 5578182, or keep 5578182 and apply this PR):
  1. Doesn't break existing code.
  2. Tests if a generator has started, and raises an exception if it hasn't.
  3. .pend_throw() has a clear pend-exc-or-fail behaviour, disjoint from .throw(). It doesn't require a test before use, but may require wrapping in a try/except block.
  4. No API changes. One existing function.
New .pend_throw() behaviour + .is_running():
  1. Breaks existing code.
  2. Can test if a generator is running (but should probably test if it has started instead).
  3. Mixes behaviour from .throw() and .pend_throw().
  4. Always requires a has-started check before use. There is no valid use case for the new naked .pend_throw() in uasyncio, because raising the exception at the start of a generator is not compatible with uasyncio's design (the old .pend_throw() behaviour was deliberate).
  5. A total of two functions: breaking changes to an existing one, plus a new one.

Neither resolves nor improves task cancellation in uasyncio, and the latter requires changing all instances of .pend_throw() to if g.has_started(): return g.pend_throw(v).

Am I missing something? Is it a fair comparison?

My takeaway is that the new behaviour breaks existing third-party code and requires adding has-started checks to in-tree code without fixing any issue, while concurrently adding a new API function and making the existing .pend_throw() less straightforward. In other words, I find the old behaviour clearly superior to the newly proposed one.

Keeping #5275 and applying #5288 to restore the old .pend_throw() behaviour should take us back to where we were before the hasty application of #5275, while keeping the refactoring and most of the size reduction from #5275, plus a lot more bytes saved by not having to prepend if g.has_started(): to every use of .pend_throw().

@jimmo
Member

jimmo commented Nov 6, 2019

I think there are three points here (@dxxb please correct me if I'm not summarizing this correctly)

  • What does it mean to cancel a task in CPython, and how does that relate to uasyncio

...and we also had the guarantee that all exceptions injected with pend_throw() would be handled by the generator. FYI the latter behaviour is required to implement CPython's description of cancel() in the official asyncio documentation and, in general, it is a requirement for being able to write well behaved generators when exceptions are involved.

I think you're referring to https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.cancel which doesn't (to my interpretation) say anything about what happens if the generator hasn't started yet.

It seems reasonable that the (trivially verifiable) behavior of not starting the generator is compatible with this documentation. Why would you start a generator just so you immediately have to make it clean itself up?

... if you wrote your tasks assuming they can be cancelled before reaching yield ...

  • The current behavior of pend_throw is fundamental to the assumptions that uasyncio makes.

Both the pure-python workaround I provided and #5275 violate uasyncio's assumption that execution in generators must reach the first suspension point to be cancelled and that the cancellation exception must trigger when the event loop runs the coroutine (that's why .pend_throw() is used).

The second part is fairly straightforward (that if a cancellation exception occurs that it must happen in the event loop), but the assumption by uasyncio that the generator must always reach the first yield is quite subtle.

It's clear that an already started coroutine must be resumed (obviously -- not just for uasyncio's assumptions but for cleanup etc), and as per micropython/micropython-lib#215 it's even more complicated when you have IO-blocked tasks. But a not-started coroutine is not IO-blocked.

It'd be really useful if you could summarise why the new behavior of pend_throw breaks uasyncio's assumptions for not-started coroutines? I can see how it's a breaking change if you assumed that all queued tasks would always attempt to run, but I'm missing how this change fundamentally breaks uasyncio. But there's a lot of history here, between the three current threads and micropython/micropython-lib#215. Seeing as you're making this point I'm guessing you've seen something that I've missed. A concise summary would be really useful in figuring out how to proceed.

  • uasyncio is different to CPython in subtle and surprising ways.

I see you've already seen #4217 (comment)

@kevinkk525
Contributor

Sorry for asking again:
I can't find a way in CPython to make a non-started generator catch the exception thrown at it. I'm not an expert on low-level stuff, so maybe I missed an option for that, but my basic testing with your code works the same in CPython and MicroPython:

The generator:

>>> def gen1():
...     try:
...         while True:
...             baz = yield
...     except Exception as e:
...         print("Caught",e)

CPython:

>>> g=gen1()
>>> next(g)
>>> g.throw(Exception("hi"))
Caught hi
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
>>> gg=gen1()
>>> gg.throw(Exception("hi"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in gen1
Exception: hi

The exception thrown at a non-started generator isn't caught by the generator.

Micropython:

>>> g=gen1()
>>> next(g)
>>> g.throw(Exception("hI"))
Caught hI
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration:
>>> gg=gen1()
>>> gg.throw(Exception("hI"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in gen1
Exception: hI

It behaves exactly like CPython. You can do the same in MicroPython with pend_throw and it will still behave in exactly the same way, with the difference that all exceptions are only handled when calling next(g):

>>> g=gen1()
>>> next(g)
>>> g.pend_throw(Exception("Hi"))
>>> next(g)
Caught Hi
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration:
>>> gg=gen1()
>>> gg.pend_throw(Exception("Hi"))
>>> next(gg)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in gen1
Exception: Hi

So the behaviour of MicroPython is now the same as CPython's, unless I'm missing something.

To me, pend_throw seems to be implemented for uasyncio to handle the exceptions when the coro is scheduled, instead of immediately, to "mimic" the behaviour of CPython coroutines.
As I understand it, its purpose was not to provide a different approach to using generators in general, was it?
Because I can't find anything in CPython confirming your wish to have generators always run until the first yield and always catch the exceptions inside the generator (unless I'm missing something).

@dxxb
Contributor Author

dxxb commented Nov 6, 2019

Hi @jimmo

What does it mean to cancel a task in CPython, and how does that relate to uasyncio

I do not think we need to, or should, look at CPython to justify or forbid changes to the underlying mechanism used by uasyncio, but I brought this up in response to people pointing at CPython as a reference, to show that:

  1. IMO CPython's own docs about cancellation do not match CPython's own implementation.
  2. CPython's own docs happen to match the behaviour enforced by old .pend_throw() because IMO it is the right thing to do when writing tasks as generators.

Nonetheless, if you plan to work on asyncio for MicroPython, this should interest you, because it will eventually become apparent that asyncio has issues with, but not limited to, cancellation, which have prompted other async libraries to be created. I believe @pfalcon understood this and (IMO correctly) prioritised correctness over compatibility. Some related reading material:

https://github.com/python-trio/trio/blob/master/docs/source/design.rst
https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world
https://vorpus.org/blog/timeouts-and-cancellation-for-humans/#cancel-scopes-trio-s-human-friendly-solution-for-timeouts-and-cancellation
https://trio.readthedocs.io/en/stable/reference-core.html#checkpoints
https://trio.readthedocs.io/en/stable/reference-core.html#cancellation-and-timeouts

I think you're referring to https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.cancel which doesn't (to my interpretation) say anything about what happens if the generator hasn't started yet. It seems reasonable that the (trivially verifiable) behavior of not starting the generator is compatible with this documentation.

The complete relevant text:

cancel()

Request the Task to be cancelled.

This arranges for a CancelledError exception to be thrown into the wrapped coroutine on the next cycle of the event loop.

The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a try … … except CancelledError … finally block. Therefore, unlike Future.cancel(), Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged.

This line:

This arranges for a CancelledError exception to be thrown into the wrapped coroutine on the next cycle of the event loop.

makes the current CPython asyncio and new uasyncio cancel() behaviour non-compliant, because the exception isn't thrown on the next cycle of the event loop; rather, it is thrown immediately in the cancel() function when the generator has not started yet.

The paragraph starting with:

The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a try … … except CancelledError … finally block. [...]

makes the current CPython asyncio and new uasyncio cancel() behaviour non-compliant, because the coroutine is not given "a chance to clean up or even deny the request by suppressing the exception with a try … except CancelledError … finally block" when the generator has not started yet.

In both cases, our point of disagreement is summed up by your phrase "which doesn't (to my interpretation) say anything about what happens if the generator hasn't started yet". However, my position is that if you find a statement in formal documentation that says "when C(), then H()", it always applies unless restrictions to the applicability of that statement are made. As you say, in this case no mention of the internal state of the generator is made, so I believe I can conclude that those statements always apply.

Those general statements in the docs makes sense for two reasons IMO:

  1. The peculiarities of a generator's internal running state are an implementation detail of generators in Python.
  2. The person writing the docs understands that a generator used as a task has to be given guarantees about its behaviour that are independent of hidden variables the task writer has no control of (internal state, implementation details) or interactions with concurrent tasks (like being cancelled).

Why would you start a generator just so you immediately have to make it clean itself up?

Phrasing the issue this way artificially limits the cases you are considering. To set the stage:

  1. In general, an event loop's schedule function, say call_soon(), is a boundary. When a generator is passed to it, the scheduler takes ownership of the generator's life cycle and execution flow. The caller can no longer make assumptions about the generator's state upon return from the schedule function; e.g., in general a scheduler can run the generator whenever it wants, and could even do so immediately in call_soon() before returning to the caller.
  2. In general, scheduling and cancellation of tasks are carried out by different tasks (or by functions running outside the event loop). Decoupling is good(!), and in async apps events can trigger the scheduling or cancelling of tasks, and the tasks doing the cancellation may know nothing about the tasks being cancelled, besides that they are tagged (or listed) for cancellation under specific conditions.

So, back to your question: once a task is scheduled, it can be cancelled at any time. Sure, most of the time it won't be cancelled immediately, but it can be, and for correctness that case must be handled the same way as any other cancellation. The caller of cancel() may know nothing about the task it is cancelling, while the task is the unit of work being cancelled and should encode the knowledge of what to do when it is cancelled. So the right way to deal with cancellation is to always give the task a chance to process the cancellation, making all cancellations behave the same. Since the only place where injected exceptions can be raised and caught in a generator is at yield/await, that place must be reached in order to raise the exception.
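The practical difference can be demonstrated in plain CPython with throw() alone: throwing into a not-started generator raises the exception before the body is entered, so neither the except clause nor the finally clause runs, whereas after the first yield (a checkpoint) both do. A minimal check:

```python
# CPython-only check (plain throw(), no pend_throw()): throwing into a
# not-started generator raises before the body is entered, so neither the
# except nor the finally clause runs; after the first yield, both do.
def task(log):
    try:
        yield
    except ValueError:
        log.append("handled")
    finally:
        log.append("cleanup")

# Case 1: cancel before the generator has started.
log1 = []
g = task(log1)
try:
    g.throw(ValueError())  # body never entered: no handling, no cleanup
except ValueError:
    pass
print(log1)  # []

# Case 2: cancel after reaching the first yield (a checkpoint).
log2 = []
g = task(log2)
next(g)
try:
    g.throw(ValueError())  # raised at the yield, handled by the generator
except StopIteration:
    pass
print(log2)  # ['handled', 'cleanup']
```

Case 1 is the scenario the argument is about: the task's except/finally cleanup is silently skipped when cancellation happens before the first suspension point.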

The current behavior of pend_throw is fundamental to the assumptions that uasyncio makes

[...] the assumption by uasyncio that the generator must always reach the first yield is quite subtle. [...] It's clear that an already started coroutine must be resumed [...] it's even more complicated when you have IO-blocked tasks. But a not-started coroutine is not IO-blocked.

The old behaviour of .pend_throw() was deliberately designed to make all injected exceptions (including cancellations) be raised at what Trio calls a checkpoint, i.e. a point in the execution flow where scheduling or cancellation can occur, thereby guaranteeing that all cancellations behave the same way (and can be caught by the generator). This is a desirable feature, for the reasons I mentioned above (see also the material I linked).

[...] I can see how it's a breaking change if you assumed that all queued tasks would always attempt to run, but I'm missing how this change fundamentally breaks uasyncio. [...] A concise summary would be really useful in figuring out how to proceed.

As you say, given that uasyncio deliberately enforced reaching a Checkpoint before a cancellation was possible, code has been written making that assumption. So existing code relies on it and that should be enough to rule out making changes that allow exceptions to be injected into tasks before they are given a chance to run.

However, the behaviour enforced by old .pend_throw() is not a quirk in uasyncio, it is a desirable feature for async event loops and was deliberately added to uasyncio to make it more like Trio by ensuring exceptions and scheduling can only happen in explicitly defined places in generators and helps writing simpler and less buggy code.

Regardless of the fate of uasyncio, I really don't want MicroPython to lose the old .pend_throw() behaviour, because it is a simple primitive building block for enforcing a better async app writing style. Please see #5288 (comment) for my reasons as to why the old .pend_throw() behaviour, but not necessarily the old .pend_throw() implementation, is superior to the new .pend_throw() behaviour + .is_running().

Note that during event loop shutdown a form of hard cancelling may be required for non-started generators, but we always had throw() for that.

I see you've already seen #4217 (comment)

So far I reacted to your last comment on that issue but I haven't even looked at what that issue is about (brain cycles: exhausted) but now I am afraid to look! 😆

@dxxb
Contributor Author

dxxb commented Nov 10, 2019

I can't find a way in CPython to make a non-started generator catch the exception thrown at it.
[...]
So the behaviour of micropython is now the same as CPython unless I'm missing something

You cannot catch exceptions in non-started generators in CPython, and CPython's asyncio wraps generators to turn them into tasks. .throw() already behaved the same in CPython and MicroPython. .throw() is all you need to implement CPython's asyncio, as long as generators are wrapped.

As I understand it, its purpose was not to provide a different approach to using generators in general was it? Because I can't find a way in CPython confirming your wish to have generators always run until the first yield and always catch the Exceptions inside the generator (unless I'm missing something).

This is not my wish: I have explained elsewhere what I believe its purpose to be, and why.
.pend_throw() used to do two things:

  1. Postpone the exception on started generators.
  2. Deliberately forbid pending an exception on non-started generators, as a simple solution to the problem of postponing exceptions on non-started generators.

The first explains why .pend_throw() exists: using .throw() would require wrapping each generator like CPython's asyncio does.

The second cannot be explained away as a bug or an oversight on @pfalcon's part, because @pfalcon's comments show it was deliberate, and because it required extra code to implement compared to letting .pend_throw() postpone an exception on a non-started generator.

You mentioned feature 1 in your comment, but you left out feature 2, which IMO shows that the intent was deliberately and precisely to only raise exceptions at yield/await.
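The two features can be sketched together in pure Python. This is only an emulation under stated assumptions: MicroPython implemented the old behaviour natively in C, and the PendWrapper class, its attribute names, and the error message are hypothetical stand-ins, with ValueError standing in for a cancellation exception.

```python
# Pure-Python sketch of the old .pend_throw() semantics (hypothetical wrapper).
class PendWrapper:
    def __init__(self, gen):
        self.gen = gen
        self.started = False
        self.pending = None

    def __next__(self):
        if self.pending is not None:
            exc, self.pending = self.pending, None
            return self.gen.throw(exc)  # feature 1: raised at the resume point
        self.started = True
        return next(self.gen)

    def pend_throw(self, exc):
        if not self.started:
            # feature 2: deliberately refused on non-started generators
            raise TypeError("can't pend throw to just-started generator")
        self.pending = exc  # feature 1: postponed until the next resume

def gen(log):
    try:
        while True:
            yield
    except ValueError:
        log.append("caught inside")  # injected exceptions always land here

log = []
w = PendWrapper(gen(log))
try:
    w.pend_throw(ValueError())  # not started: refused
except TypeError:
    log.append("refused")
next(w)                         # advance to the first yield
w.pend_throw(ValueError())      # now accepted and postponed
try:
    next(w)                     # raised at the yield, caught by the generator
except StopIteration:
    pass
print(log)  # ['refused', 'caught inside']
```

Feature 2 is what guarantees that every exception injected via the pend mechanism is raised at a yield, where the generator's own try/except can see it.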

@dxxb
Contributor Author

dxxb commented Nov 11, 2019

@dpgeorge , @jimmo any comment about #5288 (comment) , #5288 (comment) and #5288 (comment) ?

@dpgeorge I am happy to create a PR on micropython-lib to copy the existing uasyncio modules to uasched (or any other name you prefer), then volunteer to fix its outstanding issues and maintain it under the new namespace. I would just like to have a mechanism equivalent to the old .pend_throw() that is not worse than the old .pend_throw(), as per my #5288 (comment). Would that be acceptable?

@dpgeorge
Member

The new version (v3) of uasyncio has been in use (and in production code) for some time now and has shown that cancellation is a very tricky thing to get right, and it works well in uasyncio v3. So pend_throw() (which acted as a simple generator/task cancellation method) will no longer be supported. As such this change is not needed.

@dpgeorge dpgeorge closed this Jul 13, 2021