# retry_after should be an int #52
Fix for issue #52. The Retry-After header (when provided) is a string and should be cast to an integer for obvious reasons. 😄
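A minimal sketch of the cast being described (the header dict and its value here are illustrative, not the library's actual code):

```python
# HTTP header values arrive as strings; a hypothetical 429 response:
response_headers = {"Retry-After": "200"}

# Passing the raw string to time.sleep() raises
# "TypeError: an integer is required (got type str)", so cast first.
retry_after = int(response_headers.get("Retry-After", 0))
print(type(retry_after).__name__)  # -> int
```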
@fdemello and @doyston, thanks for reporting this issue! I have fixed the bug (a regression bug). Please update your package installations to the latest, and let me know if you have any issues.

```shell
$ pip install --upgrade ciscosparkapi
```
Thanks @cmlccie. I updated the package and I can see it handling the rate-limiting... to a certain extent. The script is still ending prematurely as a result of a 429 response. I can see the script is now waiting the retry-after timer as specified in the response from the Spark API, but eventually the script crashes. It looks like it happens straight after it finished waiting from a previous 429 response.
@doyston, that is excellent information! That is a case that I didn't expect, and it is unintentionally causing the exception to be re-raised instead of being handled:

```python
try:
    # Check the response code for error conditions
    check_response_code(response, erc)
except SparkRateLimitError as e:
    # Catch rate-limit errors
    # Wait and retry if automatic rate-limit handling is enabled
    if self.wait_on_rate_limit and e.retry_after:
        warnings.warn(SparkRateLimitWarning(response))
        time.sleep(e.retry_after)
        continue
    else:
        # Re-raise the SparkRateLimitError
        raise
```

Zero (0) of course evaluates as False, so a retry_after of 0 takes the re-raise branch. Thank you for the great reporting!!
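The falsy-zero behavior can be demonstrated in isolation (the flag and values below are illustrative, not taken from the library):

```python
# Mirrors the session's rate-limit handling condition
wait_on_rate_limit = True

for retry_after in (15, 0):
    if wait_on_rate_limit and retry_after:
        print(f"retry_after={retry_after}: wait and retry")
    else:
        # retry_after == 0 is falsy, so this branch re-raises
        print(f"retry_after={retry_after}: exception re-raised")
```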
Fix bug number 2 associated with issue #52: Ensure that the `SparkRateLimitError.retry_after` attribute is always a non-negative int, and then (in the automated rate-limit handling code in `restsession.py`) don't test for the validity of the `retry_after` attribute; make sure it is good and then use it.
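One way to guarantee a non-negative int, as a sketch (the helper name, signature, and fallback default are hypothetical, not the library's actual implementation):

```python
def parse_retry_after(header_value, default=200):
    """Coerce a Retry-After header value to a non-negative int.

    `default` is a hypothetical fallback for missing/unparseable values.
    """
    try:
        value = int(header_value)
    except (TypeError, ValueError):
        # Header missing (None) or not a number
        return default
    return max(value, 0)

print(parse_retry_after("30"))   # -> 30
print(parse_retry_after("-5"))   # -> 0
print(parse_retry_after(None))   # -> 200
```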
@doyston, I have pushed v0.9.1, which should hopefully squash this bug. Check it out and let me know if you have any further issues. 🙂 Thanks again for the useful info!
As Nick Mueller pointed out in the Python Spark Devs room, there was at least one more interesting case out in the world: an API returning a Retry-After value of 0. To account for the expectation that some amount of wait time was expected before the request would be retried, I'm going to push another small update changing any retry_after of 0 to a positive default wait time.
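That adjustment might be sketched as follows (`DEFAULT_RETRY_AFTER` and its value of 200 seconds are assumptions for illustration, not necessarily what the library uses):

```python
DEFAULT_RETRY_AFTER = 200  # assumed default wait in seconds

def effective_retry_after(retry_after):
    """Replace a Retry-After of 0 with a positive default wait time."""
    return retry_after if retry_after > 0 else DEFAULT_RETRY_AFTER

print(effective_retry_after(0))   # -> 200
print(effective_retry_after(30))  # -> 30
```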
```
Traceback (most recent call last):
    for member in api.memberships.list(roomId=room.id):
  File "/usr/local/lib/python3.6/site-packages/ciscosparkapi/api/memberships.py", line 185, in list
    for item in items:
  File "/usr/local/lib/python3.6/site-packages/ciscosparkapi/restsession.py", line 390, in get_items
    for json_page in pages:
  File "/usr/local/lib/python3.6/site-packages/ciscosparkapi/restsession.py", line 345, in get_pages
    response = self.request('GET', url, erc, params=params, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/ciscosparkapi/restsession.py", line 287, in request
    time.sleep(e.retry_after)
TypeError: an integer is required (got type str)
```