
100% CPU and hangs after calling .content on a requests Response object #740


Open
callumapplsys opened this issue May 18, 2018 · 2 comments

Comments

@callumapplsys

If I do this, bpython3 starts using 100% of my CPU and hangs completely until I kill it. If I run the same code in the standard Python REPL, it works fine.

callum@destroyer2[15:59:26] ~
 $ bpython3
bpython version 0.17 on top of Python 3.6.3 /usr/bin/python3
>>> import requests
>>> r = requests.get('http://google.com')
>>> r.content

Ubuntu 17.10, bpython 0.17, Python 3.6.3, requests 2.18.4

It might be related to requests only fetching the response body when .content is accessed; until that point it has only retrieved the headers.

@vbawa

vbawa commented Aug 13, 2018

I believe that this is just a manifestation of #703, which doesn't seem to be fixed for me (or you).

I did a little testing locally just now (Ubuntu 17.10, bpython 0.17, Python 3.6.3), and I can reliably reproduce the issue you describe when r.content is above a certain threshold. For google.com, the output string is ~10kB, but when I tried it with a response content of ~100 chars, there was no problem.
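If the length of the string is what matters, a network-free reproduction should be possible: evaluating a sufficiently long string at the bpython prompt ought to trigger the same hang, assuming the bottleneck is bpython rendering the long repr rather than anything in requests. This is a sketch of that idea; the ~10 kB size is an assumption taken from the google.com observation above.

```python
# Hypothetical network-free reproduction (no requests needed).
# Build a string roughly the size of the google.com response body;
# evaluating `payload` at a bpython 0.17 prompt should then exercise
# the same long-repr printing path that r.content does.
payload = "x" * 10_000

# In the standard Python REPL this prints instantly.
print(len(payload))
```

If this hangs in bpython but not in the plain REPL, it would confirm the problem is in bpython's output rendering, independent of requests.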

@vbawa

vbawa commented Aug 13, 2018

I should note that, while long strings are being printed, I also see the CPU usage spike you describe.
