If I do this, bpython3 starts using 100% of my CPU and hangs completely until I kill it. If I run the same code from the standard Python REPL, it works fine.
callum@destroyer2[15:59:26] ~
$ bpython3
bpython version 0.17 on top of Python 3.6.3 /usr/bin/python3
>>> import requests
>>> r = requests.get('http://google.com')
>>> r.content
I believe that this is just a manifestation of #703, which doesn't seem to be fixed for me (or you).
I did a little testing locally just now (Ubuntu 17.10, bpython 0.17, Python 3.6.3), and I can reliably reproduce the issue you describe when r.content is above a certain threshold. For google.com, the output string is ~10kB, but when I tried it with a response content of ~100 chars, there was no problem.
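To separate the network from the display problem, the threshold can be probed without `requests` at all. A minimal sketch; the exact sizes are assumptions based on the observations above, not measured cut-offs:

```python
# Network-free sketch of the size threshold described above: echoing a
# value whose repr is around 10 kB in bpython should (per the report)
# trigger the hang, while a ~100-byte value prints without trouble.
small = b"x" * 100      # comparable to the ~100-char response that worked
big = b"x" * 10000      # roughly the size of r.content for google.com
print(len(small), len(big))
```

Pasting `big` on its own line inside bpython (so its repr is echoed) would then reproduce the hang if the problem is purely output size rather than anything `requests` does.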
Environment: Ubuntu 17.10, bpython 0.17, Python 3.6.3, requests 2.18.4
It might be something to do with `requests.get` only requesting the body when `.content` is first accessed; otherwise it just gets the headers.
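That lazy-read behavior can be illustrated without any network traffic by building a `Response` by hand and pointing its raw stream at an in-memory buffer. This is a sketch of how `Response.content` works internally, not how `requests.get` is normally used; the `_content` attribute inspected here is a private implementation detail:

```python
import io
from requests.models import Response

# Construct a Response manually (no network) to show that .content is
# read lazily from the underlying raw stream and cached on first access.
resp = Response()
resp.status_code = 200
resp.raw = io.BytesIO(b"hello " * 2000)  # ~12 kB body, similar to google.com's

assert resp._content is False   # body not read yet, only "headers" exist
body = resp.content             # first access drains the raw stream
print(len(body))
assert resp._content == body    # cached; later accesses reuse it
```

So whether the hang comes from the download or the display, the download itself finishes as soon as `.content` is evaluated; the large cached bytes object is what bpython then struggles to echo.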