New Mechanism for Avoiding Flood Limits #3018
Comments
@harshil21 and I just talked about this a bit. Let me try to summarize the key results, so that others can agree or disagree with them :D

mjpieters/aiolimiter#73 is a big discussion; I haven't yet seen what parts could apply to us too.

Found the inspect function which helps with that - getcoroutinelocals. So it's something like:

>>> async def x(a, b, c):
...     this = 2
...     return this
...
>>> f = x(1, 2, 3)
>>> inspect.getcoroutinelocals(f)
{'a': 1, 'b': 2, 'c': 3}

ah, nice 👍 it doesn't give you access to …

nope
Supersedes & closes #2139
Below I write down the different thoughts that I had on how we can introduce a new rate limiting mechanism. Pretty much everything is up for discussion. What do you want to have designed differently? What did I miss?
Feature/Mechanism-Wishlist
Integration into PTB
One of the major shortcomings of MessageQueue was that it was not properly integrated with the framework. IMO that's one of the main points that should be considered. I imagine that the setup should be such that something like … or … suffices (see the sketch right below for the general idea). That is, ExtBot would have a (protected?) attribute holding the rate limiter and all requests would automatically go through that.
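Purely to illustrate the kind of one-liner setup I have in mind - the builder method and the limiter class are made-up names, nothing of this exists yet:

```python
from telegram.ext import ApplicationBuilder


class MyRateLimiter:
    """Stand-in for a user-provided rate limiter implementing the interface sketched below."""


# Hypothetical: .rate_limiter(...) is not an existing builder method, it only
# illustrates how the limiter could be plugged in during setup.
application = (
    ApplicationBuilder()
    .token("TOKEN")
    .rate_limiter(MyRateLimiter())
    .build()
)
```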
Note that I wrote ExtBot - this functionality should be purely for telegram.ext and not in telegram. getUpdates should not get passed through the rate limiter.

Abstract Interface
I'd like to make this a pluggable interface similar to BaseRequest. That way we give users a chance to implement custom, elaborate logic on their own. Such an interface class will probably just have one method - maybe process_request or so - that Bot will call.

However, I'm somewhat unsure what this method should do/return. I see two options (sketched in code after this list):

1. process_request receives the coroutine and awaits it itself. ExtBot would probably do something like await self._rate_limiter.process_request(self._post(…)), and in process_request one could do something like async with self._semaphore: await coroutine.
2. process_request could return an async context manager, such that ExtBot would do async with self._rate_limiter.process_request(…): await self._post(…).

I guess the second one might be a bit harder for python-novices to implement. Then again, doing that is a rather advanced task anyway.
The first one may also allow for a bit more flexibility in the implementations. OTOH, the context manager approach separates the Bot from the RateLimiter a bit more clearly, which I potentially like.
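To make the two shapes a bit more concrete, minimal sketches - a plain semaphore stands in for any real limiting logic:

```python
import asyncio
from contextlib import asynccontextmanager
from typing import Any, AsyncIterator, Awaitable


class AwaitingLimiter:
    """Option 1: process_request receives the coroutine and awaits it itself."""

    def __init__(self) -> None:
        self._semaphore = asyncio.Semaphore(10)

    async def process_request(self, coroutine: Awaitable[Any]) -> Any:
        async with self._semaphore:
            return await coroutine


class ContextManagerLimiter:
    """Option 2: process_request returns an async context manager; ExtBot does the awaiting."""

    def __init__(self) -> None:
        self._semaphore = asyncio.Semaphore(10)

    @asynccontextmanager
    async def process_request(self) -> AsyncIterator[None]:
        async with self._semaphore:
            yield
```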
Open question: Retries

One thing that one can think of is whether the rate limiter should be allowed/expected to retry a request in case a RetryAfter exception is raised. My gut feeling is that it shouldn't, as it violates the chain of command in the PTB framework. But I'm not completely sure.

In case we want to allow for retries, we'd have to provide a way for that to be possible, e.g. by passing a tuple (method, kwargs) such that the rate limiter can call method(**kwargs) as often as needed.
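If we did allow retries, a limiter built around the (method, kwargs) idea could look roughly like this (sketch only, and again: I lean towards not doing this):

```python
import asyncio
from typing import Any, Callable, Coroutine, Dict

from telegram.error import RetryAfter


class RetryingLimiter:
    """Hypothetical limiter that retries on flood-control errors."""

    async def process_request(
        self, method: Callable[..., Coroutine], kwargs: Dict[str, Any]
    ) -> Any:
        while True:
            try:
                return await method(**kwargs)
            except RetryAfter as exc:
                # Telegram tells us how long to back off.
                await asyncio.sleep(exc.retry_after)
```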
Per-Request Decisions/Priorities

IMO it's important to try and allow for requests to be processed differently depending on e.g. which endpoint is called (answerInlineQuery, sendMessage, etc.).

This would e.g. allow giving answering inline queries a higher priority than sending messages, and giving batch notifications a lower priority than other messages (there even was a Discussions entry requesting something like this at some point, but I can't find it now).
So a signature of process_request could look roughly like the sketch below.
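Sketch only - the names simply mirror the pieces discussed in this issue, nothing of this is fixed:

```python
from typing import Any, Coroutine, Dict, Optional


class BaseRateLimiter:  # name as floated under "Naming" below
    async def process_request(
        self,
        callback: Coroutine,  # or return a context manager instead, see the two options above
        endpoint: str,  # e.g. "sendMessage"
        data: Dict[str, Any],  # plain bot-method parameters, Defaults already inserted
        rate_limit_kwargs: Optional[Dict[str, Any]] = None,  # per-request info supplied by the user
    ) -> Any:
        ...
```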
Here IMO the data should not be the RequestData that we pass to BaseRequest but basically the plain parameters that are passed to the bot method (after inserting DefaultValues).

Naming
Maybe BaseRateLimiter?

Possible built-in implementation
We could provide a basic implementation out of the box. For that we could use a 3rd party dependency like aiolimiter or pyrate-limiter. Both are published under the MIT license, so that shouldn't be a problem. aiolimiter has only one stable release, but to me it looks a bit cleaner, with a small, well-defined feature set - though that may just be a first impression.

I imagine that such an implementation could be relatively close to what MessageQueue formerly did, i.e. enforce a global delay and a per-group/chat delay (see the sketch below). One could think of providing that in a priority-queue style fashion such that users can specify a priority through the rate_limit_kwargs.
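To give an idea of what that could look like with aiolimiter - the class name and the concrete numbers are only placeholders, and priorities are left out:

```python
from typing import Any, Awaitable, Dict

from aiolimiter import AsyncLimiter


class SimpleRateLimiter:
    """Sketch of a built-in limiter: a global delay plus a per-chat delay, like MessageQueue did."""

    def __init__(self) -> None:
        self._base_limiter = AsyncLimiter(max_rate=30, time_period=1)  # roughly 30 requests/second overall
        self._chat_limiters: Dict[Any, AsyncLimiter] = {}

    async def process_request(self, coroutine: Awaitable[Any], data: Dict[str, Any]) -> Any:
        # Interface option 1 from above: receive the coroutine and await it ourselves.
        chat_id = data.get("chat_id")
        async with self._base_limiter:
            if chat_id is None:
                return await coroutine
            chat_limiter = self._chat_limiters.setdefault(
                chat_id, AsyncLimiter(max_rate=20, time_period=60)  # roughly 20 messages/minute per chat
            )
            async with chat_limiter:
                return await coroutine
```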
Implementation

The main task of calling process_request in the right spot should not be too difficult. The main issue that I see is to keep the logic away from tg.Bot. For that I have a few ideas:

Part 1
For passing endpoint and data, it should suffice to split up tg.Bot._post a bit so that ExtBot has a chance to hook into the spot between inserting DefaultValues and calling Bot.request._post.
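Roughly, the split could look like this (method names are placeholders for the idea, not the actual signatures in tg.Bot):

```python
from typing import Any, Dict


class Bot:
    async def _post(self, endpoint: str, data: Dict[str, Any]) -> Any:
        # ... insert DefaultValues into data here ...
        return await self._do_post(endpoint, data)

    async def _do_post(self, endpoint: str, data: Dict[str, Any]) -> Any:
        # This part ends up calling the request object (Bot.request._post).
        ...


class ExtBot(Bot):
    async def _do_post(self, endpoint: str, data: Dict[str, Any]) -> Any:
        # The rate limiter hooks in exactly between Defaults insertion and the actual request.
        return await self._rate_limiter.process_request(super()._do_post(endpoint, data))
```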
Part 2
For passing rate_limit_kwargs, that's a bit more tricky. We'd have to add a new parameter to all methods of ExtBot, but we don't want to add them to Bot.

One straightforward option would be to split up Bot.send_message into Bot._send_message and Bot.send_message, where only the protected one has the additional parameter. Bot.send_message would then just pass everything along to _send_message and ExtBot.send_message would call Bot._send_message directly. This would result in many additional loc in tg.Bot. Additionally, this doesn't play well with shortcut methods.
The other idea is to piggyback on api_kwargs: we add a parameter rate_limit_kwargs to ExtBot.send_message, but call super().send_message(…, api_kwargs=(api_kwargs or {}) | {secret_key: rate_limit_kwargs}). Then, before passing everything to the rate limiter, we extract them as rate_limit_kwargs = api_kwargs.pop(secret_key, {}). This looks like a dirty hack at first glance (a rough sketch follows after the list below), but the longer I think about it, the more I actually like it:
- api_kwargs doesn't interfere with any of the other parameters, so it's rather safe to use for that
- it keeps Bot and ExtBot apart without introducing endless boilerplate
- rate_limit_kwargs: we can just throw an error if they are used with a Bot instead of an ExtBot
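Roughly, the piggybacking could look like this - the sentinel key and the helper are made-up names, and a real implementation would pick something collision-proof and private to telegram.ext:

```python
from typing import Any, Dict, Optional

_RATE_LIMIT_KEY = "__rate_limit_kwargs"  # hypothetical secret sentinel key


class ExtBot(Bot):  # Bot here stands for tg.Bot
    async def send_message(
        self,
        *args: Any,
        api_kwargs: Optional[Dict[str, Any]] = None,
        rate_limit_kwargs: Optional[Dict[str, Any]] = None,
        **kwargs: Any,
    ) -> Any:
        # Smuggle the per-request rate-limit info through api_kwargs ...
        api_kwargs = (api_kwargs or {}) | {_RATE_LIMIT_KEY: rate_limit_kwargs or {}}
        return await super().send_message(*args, api_kwargs=api_kwargs, **kwargs)

    @staticmethod
    def _extract_rate_limit_kwargs(api_kwargs: Dict[str, Any]) -> Dict[str, Any]:
        # ... and pop it back out right before handing the request to the rate limiter,
        # so the key never reaches the Bot API.
        return api_kwargs.pop(_RATE_LIMIT_KEY, {})
```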