python-requests

Cannot connect to proxy error on requests.get() or requests.post() in python

放肆的年华 submitted on 2021-02-18 16:55:34
Question: I have two URLs to fetch data from. Using my code, the first URL works, whereas the second URL raises a ProxyError. I am using the requests library in Python 3 and have tried searching for the problem on Google and here, but with no success. My code snippet is:

import requests

proxies = {
    'http': 'http://user:pass@xxx.xxx.xxx.xxx:xxxx',
    'https': 'http://user:pass@xxx.xxx.xxx.xxx:xxxx',
}

url1 = 'https://en.oxforddictionaries.com/definition/act'
url2 = 'https://dictionary.cambridge.org/dictionary
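
The excerpt is cut off here, but a minimal way to narrow the problem down is to request each URL separately and catch requests.exceptions.ProxyError. The sketch below uses hypothetical placeholder credentials and a stand-in for the failing URL:

import requests

# Hypothetical placeholders -- substitute the real proxy credentials and URLs.
proxies = {
    'http': 'http://user:pass@xxx.xxx.xxx.xxx:xxxx',
    'https': 'http://user:pass@xxx.xxx.xxx.xxx:xxxx',
}
urls = [
    'https://en.oxforddictionaries.com/definition/act',  # the URL that worked
    'https://example.com/',                              # stand-in for the failing URL
]

for url in urls:
    try:
        r = requests.get(url, proxies=proxies, timeout=10)
        print(url, r.status_code)
    except requests.exceptions.ProxyError as exc:
        # Raised when the proxy itself refuses or cannot establish the tunnel.
        print(url, 'ProxyError:', exc)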

python oauthlib: in escape ValueError “Only unicode objects are escapable”

被刻印的时光 ゝ submitted on 2021-02-18 10:51:34
Question: I'm using python-social-auth to log in with social networks from my Django application. On my local machine everything works fine, but when I deploy to a server I get the following error: oauthlib.oauth1.rfc5849.utils in escape ValueError: Only unicode objects are escapable. Got None of type <type 'NoneType'>. Stacktrace:

File "django/core/handlers/base.py", line 112, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "social/apps/django_app/utils.py",

Fastest Proxy Iteration in Python

僤鯓⒐⒋嵵緔 submitted on 2021-02-17 06:10:45
Question: Let's say I have a list that contains 10,000+ proxies:

proxy_list = ['ip:port', 'ip:port', .....10,000+ items]

How do I iterate over it to find the proxies that work from my PC? Using the following code it is possible, but it takes 5 × 10,000 seconds to complete. How would I iterate through the list faster?

import requests

result = []
for I in proxy_list:
    try:
        requests.get('http://www.httpbin.org/ip', proxies={'https': I, 'http': I}, timeout=5)
        result.append(I)
    except:
        pass

Answer 1: You
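
The answer is cut off above, so what follows is only a sketch of one common way to speed this up, not necessarily the original answer's approach: check the proxies concurrently with a thread pool so the total time is bounded by the slowest batch rather than the sum of all timeouts. The worker count of 100 is an arbitrary assumption.

import requests
from concurrent.futures import ThreadPoolExecutor

def check_proxy(proxy):
    # Return the proxy if a test request through it succeeds, otherwise None.
    try:
        requests.get('http://www.httpbin.org/ip',
                     proxies={'http': proxy, 'https': proxy},
                     timeout=5)
        return proxy
    except requests.RequestException:
        return None

def find_working_proxies(proxy_list, workers=100):
    # The checks are I/O-bound, so threads overlap the 5-second timeouts.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [p for p in pool.map(check_proxy, proxy_list) if p is not None]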

Extremely long response time using requests

旧巷老猫 submitted on 2021-02-16 18:40:10
Question: Description: I have an AWS EC2 instance (Ubuntu 16) that runs a Python application in which I call some Facebook Account Kit APIs and also Google Play Store APIs. They all worked perfectly fine until I rebooted the instance two weeks ago. After the reboot, the requests take more than 5 minutes to finish, which is totally unacceptable; I have to manually set the timeout to over 10 minutes in order to let the request finish. The problem only occurs on one of my servers; I run with the same
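
As background for the timeout the question mentions (this is not part of the original post), requests accepts a per-request timeout in seconds, optionally split into a connect and a read timeout; the URL and values below are placeholders only:

import requests

# (connect timeout, read timeout) in seconds -- illustrative values.
r = requests.get('https://example.com/', timeout=(10, 600))
print(r.status_code)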

python requests json returns single quote

旧街凉风 submitted on 2021-02-16 04:52:33
Question: I'm playing a little with the Google Places API and requests. I've got:

r = requests.get(self.url,
                 params={'key': KEY,
                         'location': self.location,
                         'radius': self.radius,
                         'types': "airport"},
                 proxies=proxies)

r returns a 200 code, fine, but I'm confused by what r.json() returns compared to r.content. Extract of r.json():

{u'html_attributions': [], u'next_page_token': u'CoQC-QAAABT4REkkX9NCxPWp0JcGK70kT4C-zM70b11btItnXiKLJKpr7l2GeiZeyL5y6NTDQA6ASDonIe5OcCrCsUXbK6W0Y09FqhP57ihFdQ7Bw1pGocLs
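
The excerpt is cut off, but the likely point of confusion is that r.json() parses the body into Python objects, whose repr prints with single quotes (and u'' prefixes on Python 2), while r.content / r.text hold the raw body, which is valid JSON with double quotes. A self-contained sketch of the same difference, using a made-up payload instead of a live API call:

import json

raw = '{"html_attributions": [], "next_page_token": "abc"}'  # what r.text contains
data = json.loads(raw)       # equivalent to what r.json() returns: a Python dict

print(data)                  # {'html_attributions': [], 'next_page_token': 'abc'}  (Python repr, single quotes)
print(json.dumps(data))      # {"html_attributions": [], "next_page_token": "abc"}  (valid JSON again)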

How to extract HTTP response body from a Python requests call?

强颜欢笑 submitted on 2021-02-15 10:41:19
Question: I'm using the Python requests library. I'm trying to figure out how to extract the actual HTML body from a response. The code looks a bit like this:

r = requests.get(...)
print r.content

This should indeed print lots of content, but instead it prints nothing. Any suggestions? Maybe I've misunderstood how requests.get() works?

Answer 1: Your code is correct. I tested:

r = requests.get("http://www.google.com")
print(r.content)

And it returned plenty of content. Check the url, try "http://www.google.com
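
For completeness (not part of the original answer), a minimal sketch of the usual ways to read a response body with requests:

import requests

r = requests.get("http://www.google.com")
print(r.status_code)    # HTTP status, e.g. 200
print(r.content[:200])  # raw body as bytes
print(r.text[:200])     # body decoded to str using the detected encoding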