Possible race condition in throttling #5181
Comments
Okay, I would consider fixes for this on their own merit, but I'm not going to consider it problematic otherwise, given the transient nature of throttling.
At the moment I haven't been able to arrive at a quick fix. However, I feel that this might be a good place for people to discuss and propose possible solutions. Won't that be facilitated by keeping it open?
We'll redirect people willing to work on it here.
@bitcity you're probably going to have to fix this by writing your own throttle using a cache backend that allows you to do atomic list commands (push/append); something like Redis would probably work.
@jturmel Thanks for the tip. I ended up writing my own custom throttle class that takes care of the race condition (I use the atomic …).
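For readers landing here from search, here is a minimal sketch of the kind of custom throttle described in the two comments above: request history is kept in a Redis sorted set and updated inside a transactional pipeline, so concurrent requests cannot overwrite each other's entries. The class name, Redis connection settings, and rate values are illustrative assumptions, not DRF code and not the commenter's actual implementation.

```python
import time
import uuid

import redis
from rest_framework.throttling import BaseThrottle

# Assumed local Redis instance; adjust host/port/db for your deployment.
REDIS = redis.Redis(host="localhost", port=6379, db=0)


class RedisSlidingWindowThrottle(BaseThrottle):
    """Sliding-window throttle whose bookkeeping is done atomically in Redis."""

    rate_limit = 500        # illustrative: max requests...
    window_seconds = 3600   # ...per rolling hour

    def allow_request(self, request, view):
        key = f"throttle:{self.get_ident(request)}"
        now = time.time()

        # MULTI/EXEC pipeline: the four commands run atomically on the server,
        # so two concurrent requests cannot clobber each other's history the
        # way an unsynchronized cache.get()/cache.set() pair can.
        pipe = REDIS.pipeline(transaction=True)
        pipe.zremrangebyscore(key, 0, now - self.window_seconds)  # drop expired entries
        pipe.zadd(key, {f"{now}:{uuid.uuid4()}": now})            # record this request
        pipe.zcard(key)                                           # count entries in the window
        pipe.expire(key, self.window_seconds)
        _, _, count, _ = pipe.execute()

        return count <= self.rate_limit
```

A sorted set scored by timestamp is used here instead of the list push/append mentioned above because trimming the window becomes a single server-side command; either structure avoids the read-modify-write race.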
We've seen this problem during the last 3 days. After quickly inspecting the throttling code I realized the same things already discussed here and got to the root of it, and I'll probably end up implementing my own throttling using a Redis distributed lock. Now one question: I'd like to contribute to this, but I'm not sure there is a "common" way of solving the problem, given that we are using the cache abstraction layer and every backend could implement a specific method to get around this issue. Maybe an additional call to the cache before inserting the data could help mitigate this a bit, but that doesn't sound like a proper fix. Does anybody have ideas on what the best way to fix this would be?
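And a hedged sketch of the distributed-lock idea mentioned in the comment above: instead of replacing the throttle's storage, serialize the parent class's get-history/update-history cycle behind a per-client Redis lock. LockingUserRateThrottle, the rate, and the lock timeouts are illustrative assumptions, not code from DRF or from this thread.

```python
import redis
from rest_framework.throttling import UserRateThrottle

REDIS = redis.Redis()  # assumed local Redis instance


class LockingUserRateThrottle(UserRateThrottle):
    """UserRateThrottle whose cache read/modify/write runs under a Redis lock."""

    rate = "500/hour"  # illustrative rate

    def allow_request(self, request, view):
        key = self.get_cache_key(request, view)
        if key is None:
            return True  # nothing to throttle on
        # Per-client lock: only one request at a time may read the history,
        # append to it, and write it back. redis-py raises LockError if the
        # lock cannot be acquired within blocking_timeout.
        with REDIS.lock(f"lock:{key}", timeout=2, blocking_timeout=2):
            return super().allow_request(request, view)
```

The lock keeps DRF's existing history format intact at the cost of an extra round trip per request; the pipeline approach sketched earlier avoids the lock entirely.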
@tomchristie this is ridiculous.
@dmitry-mukhin there's no point in being aggressive.
@xordoquy Community member @bitcity did a great job of filing and describing the issue. If the issue (which is real, by the way) stays open, somebody in the community can just take it from here and prepare a PR.
I'm sorry if I come across as aggressive. I'm trying to point out that this will decrease community contributions and/or waste other people's time on doing duplicate work.
I've added the "help wanted" label to this issue. This is useful for tracking issues that are currently out of scope for maintainers, but in which contributions would be welcome.
@rpkilby I believe that labels on closed issues are ignored by the vast majority of GitHub users.
Agreed, we should probably add a section to the contributing docs about the label.
We already have about 100 open issues. Anyone interested in contributing already has the choice.
Honestly, I don't especially consider it an outstanding, addressable bug. Application-level throttling will only get you so far, and getting throttle rates that are slightly off is just a constraint of the system. Fuzziness in distributed systems can be okay in some cases. We'll point at third-party packages if anyone wants to implement something stricter.
I partially agree with that, but "slightly off" really depends on the network, which is by nature unreliable. Nothing can prevent the first request of a series from being the last to update the cache in the current scenario, because of the network itself. I agree, though, that a proper fix is really beyond the scope of the framework, but we could add a line to the docs describing the limits of the standard implementation.
Refs #5181 Co-authored-by: Adam Johnson <me@adamj.eu>
The throttle_success method of SimpleRateThrottle updates the cache with a new value of history. If there are a number of concurrent requests, there may be a race condition where certain values of history would be overwritten by stale data.

Checklist
- I have verified that this issue exists against the master branch of Django REST framework.

Steps to reproduce
- Tested with LocMemCache and memcached.
- Throttle rate set to 500/hour.
- Fire requests with ab, e.g.

Setup 1:
ab -l -c 20 -n 500 http://127.0.0.1:8000
Setup 2:
ab -l -c 1 -n 500 http://127.0.0.1:8000
Expected behavior

Both setups should be throttled identically, since the same number of requests is issued by ab (once in batches of 20 & a second time as a single request).

Actual behavior
It appears as though some concurrent requests are being ignored by the cache.
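To make the reported race concrete, the problematic pattern boils down to an unsynchronized read-modify-write against the cache, roughly like this (a paraphrase for illustration, not a verbatim copy of SimpleRateThrottle; the key name and duration are made up):

```python
import time

from django.core.cache import cache

KEY = "throttle_user_42"   # hypothetical cache key for one client
DURATION = 3600            # seconds to retain the request history


def allow_request():
    history = cache.get(KEY, [])       # requests A and B can both read the same list here
    now = time.time()
    history.insert(0, now)             # each appends only its own timestamp
    cache.set(KEY, history, DURATION)  # whichever write lands last overwrites the other,
                                       # so one request's entry vanishes from the history
    return True
```

Lost writes of this kind would explain why some concurrent requests appear to be missing from the cache under Setup 1.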