retry_after should be an int #52

Closed
fdemello opened this issue Mar 1, 2018 · 6 comments

@fdemello

fdemello commented Mar 1, 2018

Traceback (most recent call last):
    for member in api.memberships.list(roomId=room.id):
  File "/usr/local/lib/python3.6/site-packages/ciscosparkapi/api/memberships.py", line 185, in list
    for item in items:
  File "/usr/local/lib/python3.6/site-packages/ciscosparkapi/restsession.py", line 390, in get_items
    for json_page in pages:
  File "/usr/local/lib/python3.6/site-packages/ciscosparkapi/restsession.py", line 345, in get_pages
    response = self.request('GET', url, erc, params=params, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/ciscosparkapi/restsession.py", line 287, in request
    time.sleep(e.retry_after)
TypeError: an integer is required (got type str)
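
For reference, the root cause can be reproduced outside the library: time.sleep() needs a number, but HTTP header values (including Retry-After) always arrive as strings. A minimal sketch of the failure and the cast that avoids it (the header value used here is purely illustrative):

    import time

    retry_after = "1"  # hypothetical Retry-After value; header values are always strings

    try:
        time.sleep(retry_after)      # reproduces the TypeError in the traceback above
    except TypeError as exc:
        print(exc)                   # e.g. "an integer is required (got type str)"

    time.sleep(int(retry_after))     # casting the value first makes the call succeed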

@doyston

doyston commented Mar 3, 2018

+1
My script is not handling the 429 response, just like the traceback above.

cmlccie added a commit that referenced this issue Mar 4, 2018
…n int

Fix for issue #52.  The retry-after header (when provided) is a string and should be cast to an integer for obvious reasons.  😄
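
A minimal sketch of the kind of cast the commit describes, assuming (for illustration only) that the Retry-After value is read off the response headers when the rate-limit error is constructed; the constructor signature and the fallback default are assumptions, not the library's exact code:

    class SparkRateLimitError(Exception):
        """Sketch: a rate-limit error that normalizes Retry-After to an int."""

        def __init__(self, response):
            super().__init__("429 Too Many Requests")
            self.response = response
            # Header values are strings; cast so time.sleep() receives a number.
            # The fallback of 15 seconds is illustrative only.
            self.retry_after = int(response.headers.get("Retry-After", 15))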
@cmlccie
Collaborator

cmlccie commented Mar 4, 2018

@fdemello and @doyston , thanks for reporting this issue! I have fixed the bug (a regression bug ☹️) and improved the rate-limiting support in v0.9. Rate-limit responses now raise a Python warning to let you know what is going on, and I improved the test-suite to make sure that the rate-limit handling is tested in a more reliable way in the future. 😄🤞

Please update your package installations to the latest, and let me know if you have any issues.

$ pip install --upgrade ciscosparkapi
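
If you want those rate-limit warnings to be more (or less) visible, Python's standard warnings filter can be applied to them; a sketch, assuming SparkRateLimitWarning can be imported from the ciscosparkapi package:

    import warnings

    from ciscosparkapi import SparkRateLimitWarning  # assumed import path

    # Print every rate-limit warning, even repeats, while debugging...
    warnings.simplefilter("always", SparkRateLimitWarning)

    # ...or silence them entirely once you trust the automatic retries.
    # warnings.simplefilter("ignore", SparkRateLimitWarning)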

@doyston

doyston commented Mar 5, 2018

Thanks @cmlccie. I updated the package and I can see it handling the rate-limiting... to a certain extent. The script is still ending prematurely as a result of a 429 response. I can see the script now waits for the retry-after interval specified in the response from the Spark API.

    Existing account matches. Setting licenses...
    C:\Program Files (x86)\Python36-32\lib\site-packages\ciscosparkapi\restsession.py:281: SparkRateLimitWarning: Rate-limit response received; the request will automatically be retried in 9 seconds.
      warnings.warn(SparkRateLimitWarning(response))

But eventually the script crashes. It looks like it happens right after it finished waiting on a previous 429 response.
[Update] The script seems to crash when the retry-after time is zero (0).

@cmlccie
Collaborator

cmlccie commented Mar 6, 2018

@doyston that is excellent information! That is a case that I didn't expect, and it is unintentionally causing the exception to be re-raised instead of being handled:

            try:
                # Check the response code for error conditions
                check_response_code(response, erc)
            except SparkRateLimitError as e:
                # Catch rate-limit errors
                # Wait and retry if automatic rate-limit handling is enabled
                if self.wait_on_rate_limit and e.retry_after:
                    warnings.warn(SparkRateLimitWarning(response))
                    time.sleep(e.retry_after)
                    continue
                else:
                    # Re-raise the SparkRateLimitError
                    raise

Zero (0), of course, evaluates as False, so the if statement fails and the exception is re-raised instead of being handled. This should be a quick and easy fix.

-Thank you for the great reporting!!

cmlccie added a commit that referenced this issue Mar 6, 2018
Fix bug number 2 associated with issue #52:

Ensure that the SparkRateLimitError.retry_after attribute is always a non-negative int, and then (in the automated rate-limit handling code in restsession.py) don't test for the validity of the retry_after attribute - make sure it is good and then use it.
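
Restating the earlier snippet with that change applied, this is roughly what the handling looks like once retry_after is guaranteed to be a non-negative int (a sketch of the described fix, not the library's exact code):

            try:
                # Check the response code for error conditions
                check_response_code(response, erc)
            except SparkRateLimitError as e:
                # retry_after is now always a non-negative int, so there is no
                # truthiness test: a value of 0 is retried rather than re-raised.
                if self.wait_on_rate_limit:
                    warnings.warn(SparkRateLimitWarning(response))
                    time.sleep(e.retry_after)
                    continue
                else:
                    # Re-raise the SparkRateLimitError
                    raise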
@cmlccie
Collaborator

cmlccie commented Mar 6, 2018

@doyston , I have pushed v0.9.1, which should hopefully squash this bug. Check it out and let me know if you have any further issues. 🙂

Thanks again for the useful info!

@cmlccie
Collaborator

cmlccie commented Mar 6, 2018

As Nick Mueller pointed out in the Python Spark Devs room, there is at least one other case in the wild of an API returning a Retry-After of 0, which doesn't make a lot of sense:

spotify/web-api#542

The consensus is that the 0 is most probably being returned due to a millisecond rounding issue, which would mean that the Retry-After time should really fall somewhere in the range 0 < t < 1 second.

To honor the expectation that some wait time is required before the request is retried, I'm going to push another small update that changes any Retry-After value of 0 to 1 second.
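
A sketch of that adjustment, assuming the clamping happens where the header is parsed (the function name and fallback default are illustrative, not the library's exact code):

    def parse_retry_after(response, default=15):
        """Return a positive int wait time from a Retry-After header (sketch)."""
        try:
            retry_after = int(response.headers.get("Retry-After", default))
        except (TypeError, ValueError):
            retry_after = default
        # Treat 0 (most likely a sub-second value rounded down) as a 1-second wait.
        return max(1, retry_after)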
