urllib3

Proxy connection with Python

Submitted by 早过忘川 on 2019-11-30 22:32:30
I have been attempting to connect to URLs from Python. I have tried urllib2, urllib3, and requests. It is the same issue that I run up against in all cases; once I get the answer, I imagine all three of them would work fine. The issue is connecting via proxy. I have entered our proxy information but am not getting any joy. I am getting 407 codes and error messages like:

    HTTP Error 407: Proxy Authentication Required ( Forefront TMG requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )

However, I can connect using a number of other applications that go through
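A 407 means the proxy itself, not the target site, rejected the request. A minimal sketch of building a Proxy-Authorization header for HTTP Basic auth with only the standard library, using hypothetical credentials; note that Forefront TMG is often configured for NTLM or Kerberos, in which case plain Basic auth will still come back 407:

```python
import base64

def proxy_auth_header(user, password):
    """Build a Proxy-Authorization header for HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Proxy-Authorization": "Basic " + token}

headers = proxy_auth_header("user", "password")  # hypothetical credentials
print(headers["Proxy-Authorization"])

# With urllib3 this would be passed along on every tunneled request
# (untested sketch, hypothetical proxy host):
#   proxy = urllib3.ProxyManager("http://proxy.example:8080", proxy_headers=headers)
```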

Ignore certificate validation with urllib3

Submitted by 巧了我就是萌 on 2019-11-30 22:10:12
I'm using urllib3 against private services that have self-signed certificates. Is there any way to have urllib3 ignore the certificate errors and make the request anyway?

    import urllib3
    c = urllib3.HTTPSConnectionPool('10.0.3.168', port=9001)
    c.request('GET', '/')

When using the following:

    import urllib3
    c = urllib3.HTTPSConnectionPool('10.0.3.168', port=9001, cert_reqs='CERT_NONE')
    c.request('GET', '/')

the following error is raised:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python3/dist-packages/urllib3/request.py", line 67, in request
        **urlopen
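On current urllib3 releases, cert_reqs="CERT_NONE" on a PoolManager (or HTTPSConnectionPool) is the supported way to skip verification; the sketch below also silences the InsecureRequestWarning that this triggers. The host and port are the ones from the question, so the actual request stays commented out:

```python
import urllib3

# Verification off means urllib3 emits InsecureRequestWarning on each request;
# silence it explicitly rather than globally disabling all warnings.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

http = urllib3.PoolManager(cert_reqs="CERT_NONE")
# resp = http.request("GET", "https://10.0.3.168:9001/")  # host from the question
```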

How often does python-requests perform dns queries

Submitted by 烂漫一生 on 2019-11-30 17:20:22
Question: We are using Locust for load testing REST API services behind elastic load balancing. I came across this article regarding load balancing and auto scaling, which is something we are testing. Locust uses python-requests, which uses urllib3, so my question is: does python-requests do a DNS query for every connect, and if not, is it configurable?

Answer 1: Locust is using python-requests, which is using urllib3, which is using socket.getaddrinfo, which has DNS caching disabled according to this
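The standard library's socket.getaddrinfo has no cache of its own, so every new connection resolves the name again (OS-level caches like nscd or systemd-resolved aside). A hypothetical sketch of bolting a cache on top; note it deliberately ignores DNS TTLs, which a production resolver must honor, and it simplifies the signature to (host, port), whereas the real getaddrinfo also takes family/type/proto/flags:

```python
import socket
from functools import lru_cache

def make_cached_resolver(resolver=socket.getaddrinfo, maxsize=256):
    """Wrap a getaddrinfo-style callable with an in-process cache (sketch)."""
    @lru_cache(maxsize=maxsize)
    def cached(host, port):
        # Freeze the result to a tuple so lru_cache can store it safely.
        return tuple(resolver(host, port))
    return cached

# Usage sketch: resolve = make_cached_resolver(); resolve("example.com", 443)
```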

Python (pip) - RequestsDependencyWarning: urllib3 (1.9.1) or chardet (2.3.0) doesn't match a supported version

Submitted by ↘锁芯ラ on 2019-11-30 00:14:54
Question: I found several pages about this issue but none of them solved my problem. Even a plain pip show gives:

    /usr/local/lib/python2.7/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.9.1) or chardet (2.3.0) doesn't match a supported version!
      RequestsDependencyWarning)
    Traceback (most recent call last):
      File "/usr/bin/pip", line 9, in <module>
        load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
      File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init
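The warning comes from a compatibility check inside requests/__init__.py: it parses the installed urllib3 and chardet versions and warns when they fall outside the range that particular requests release supports; the usual fix is upgrading the stale packages (pip install --upgrade requests urllib3 chardet). A rough sketch of the kind of check involved; the minimum versions below are illustrative assumptions, not the exact bounds every requests release uses:

```python
def version_tuple(v):
    """'1.9.1' -> (1, 9, 1); non-numeric suffixes are dropped for simplicity."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)

# Assumed minimums for illustration: the versions from the warning are older.
assert version_tuple("1.9.1") < version_tuple("1.21.1")  # urllib3 from the warning
assert version_tuple("2.3.0") < version_tuple("3.0.2")   # chardet from the warning
```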

TypeError: urlopen() got multiple values for keyword argument 'body' while executing tests through Selenium and Python on Kubuntu 14.04

Submitted by [亡魂溺海] on 2019-11-29 13:45:45
I'm trying to run a Selenium script in Python on Kubuntu 14.04. I get this error message whether I try chromedriver or geckodriver; both give the same error.

    Traceback (most recent call last):
      File "vse.py", line 15, in <module>
        driver = webdriver.Chrome(chrome_options=options, executable_path=r'/root/Desktop/chromedriver')
      File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/chrome/webdriver.py", line 75, in __init__
        desired_capabilities=desired_capabilities)
      File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/remote/webdriver.py", line 156, in __init__
        self.start_session(capabilities,
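Stripped of Selenium, the TypeError itself is plain Python: a value reaches the same parameter both positionally and by keyword. That is what happens when an old selenium release calls a newer urllib3's urlopen() with an extra positional argument, so the usual fix is upgrading selenium until the two signatures agree. A minimal Selenium-free illustration; urlopen here is a stand-in, not urllib3's:

```python
def urlopen(method, url, body=None):
    # Stand-in with a urllib3-like parameter list.
    return (method, url, body)

try:
    # "{}" fills body positionally, then body= supplies it a second time.
    urlopen("POST", "/session", "{}", body="{}")
except TypeError as exc:
    print(exc)  # ... got multiple values for argument 'body'
```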

urllib3 on python 2.7 SNI error on Google App Engine

Submitted by 别说谁变了你拦得住时间么 on 2019-11-29 07:39:57
I'm trying to download an HTTPS page from my site, hosted on Google App Engine with SNI. No matter what library I use, I get the following error:

    [Errno 8] _ssl.c:504: EOF occurred in violation of protocol

I've tried solving the error in many ways, including using the urllib3 pyOpenSSL monkeypatch:

    from urllib3.contrib import pyopenssl
    pyopenssl.inject_into_urllib3()

But I always get the same error mentioned above. Any ideas?

Unfortunately for urllib3, the Python standard library did not add SNI support until Python 3.2 (see issue #118 @ urllib3). To use SNI in Python 2.7 with urllib3, you'll need
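Whether the pyOpenSSL shim is needed at all depends on the interpreter: the stdlib gained SNI in Python 3.2, and (to the best of my knowledge) the PEP 466 backport brought it to 2.7.9 as well. A quick probe that is safe to run on any version:

```python
import ssl

# True on Python 3.2+ (and on 2.7.9+ via the PEP 466 ssl backport); when
# this is False, urllib3 needs the pyOpenSSL shim, which historically also
# required the ndg-httpsclient and pyasn1 packages.
print(getattr(ssl, "HAS_SNI", False))
```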

Python's requests “Missing dependencies for SOCKS support” when using SOCKS5 from Terminal

Submitted by 岁酱吖の on 2019-11-28 17:14:27
I'm trying to interact with an API from my Python 2.7 shell using a package that relies on Python's requests. The thing is, the remote address is blocked by my network (university library). So to speak to the API I do the following:

    ~$ ssh -D 8080 name@myserver.com

And then, in a new terminal on the local computer:

    ~$ export http_proxy=socks5://127.0.0.1:8080 https_proxy=socks5://127.0.0.1:8080

Then I run the program in the Python console, but it fails:

    ~$ python
    >>> import myscript
    >>> id = '1213'
    >>> token = 'jd87jd9'
    >>> connect(id,token)
      File "/home/username/.virtualenvs/venv/local/lib/python2.7/site
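The "Missing dependencies for SOCKS support" message means requests found the socks5:// proxy URLs but the PySocks package is not installed; pip install requests[socks] (or pip install pysocks) inside the virtualenv is the usual fix. Requests really does pick those variables up via the standard library. A sketch (Python 3 spelling; Python 2's equivalent lives in urllib), reusing the port from the ssh command above:

```python
import os
import urllib.request

# Mirror the `export ...` lines from the terminal session above.
os.environ["http_proxy"] = "socks5://127.0.0.1:8080"
os.environ["https_proxy"] = "socks5://127.0.0.1:8080"

# requests consults this same stdlib helper when no proxies= dict is given.
print(urllib.request.getproxies()["http"])  # socks5://127.0.0.1:8080
```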

Using the Requests python library in Google App Engine

Submitted by 烂漫一生 on 2019-11-28 05:23:00
I'm trying to use the awesome Requests library on Google App Engine. I found a patch for urllib3, which requests relies on, that is compatible with App Engine: https://github.com/shazow/urllib3/issues/61 . I can successfully import requests, but then

    response = requests.get('someurl')

fails with the following traceback. What's going on?

    Traceback (most recent call last):
      File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/admin/__init__.py", line 317, in post
        exec(compiled_code, globals())
      File

Python urllib3 and how to handle cookie support?

Submitted by 非 Y 不嫁゛ on 2019-11-28 01:05:18
So I'm looking into urllib3 because it has connection pooling and is thread-safe (so performance is better, especially for crawling), but the documentation is... minimal, to say the least. urllib2 has build_opener, so you can do something like:

    #!/usr/bin/python
    import cookielib, urllib2
    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    r = opener.open("http://example.com/")

But urllib3 has no build_opener method, so the only way I have figured out so far is to manually put it in the header:

    #!/usr/bin/python
    import urllib3
    http_pool = urllib3.connection_from_url(
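urllib3 indeed has no CookieJar integration: cookie handling stays manual, reading Set-Cookie off each response and sending a Cookie header on the next request. The stdlib's http.cookies can do the parsing; a small sketch in which the helper names are mine:

```python
from http.cookies import SimpleCookie

def extract_cookies(set_cookie_header):
    """Parse a Set-Cookie header value into a {name: value} dict."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    return {name: morsel.value for name, morsel in jar.items()}

def cookie_header(cookies):
    """Serialize a cookie dict back into a Cookie request header value."""
    return "; ".join(f"{name}={value}" for name, value in cookies.items())

cookies = extract_cookies("sessionid=abc123; Path=/; HttpOnly")
print(cookie_header(cookies))  # sessionid=abc123

# With urllib3 (untested sketch against a hypothetical pool):
#   resp = pool.request("GET", "/")
#   cookies = extract_cookies(resp.headers.get("Set-Cookie", ""))
#   pool.request("GET", "/next", headers={"Cookie": cookie_header(cookies)})
```

This loses cookie attributes like Path, Domain, and Expires, which a real crawler has to respect; for anything beyond a single host, requests' session-level cookie jar is the easier road.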