httplib

Why doesn't my variable value get passed to the finally block in Python

China☆狼群 submitted on 2020-01-25 04:30:11
Question: This is for Python 2.7.10. Perhaps I'm not using the try..except..finally block correctly. I need to check the HTTP response code I get from a webpage. If I get a 200 code, everything is working. If I get any other code, I report what code it is. It works fine when I get a 200 HTTP code. If I get an exception, for some reason, it gives me an UnboundLocalError, stating my variable is referenced before assignment. How do I get my variable to be recognized in the finally block? Here's my code: try: conn = httplib
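The usual fix is to bind the name before entering the try block, so the finally clause always has something to reference even when the request fails early. A minimal sketch of that pattern, using a placeholder host and path rather than the asker's original code:

import httplib

def check_status(host, path):
    # Bind the names before the try block so the finally clause can
    # reference them even if the request raises before assignment.
    conn = None
    status = None
    try:
        conn = httplib.HTTPConnection(host)
        conn.request("GET", path)
        status = conn.getresponse().status
    except Exception as e:
        print "Request failed: %s" % e
    finally:
        if conn is not None:
            conn.close()
        if status == 200:
            print "Everything is working"
        elif status is not None:
            print "Got HTTP code %d" % status

check_status("example.com", "/")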

Is it possible to loop over an httplib.HTTPResponse's data?

生来就可爱ヽ(ⅴ<●) submitted on 2020-01-16 03:28:10
Question: I'm trying to develop a very simple proof of concept to retrieve and process data in a streaming manner. The server I'm requesting from will send data in chunks, which is good, but I'm having trouble using httplib to iterate through the chunks. Here's what I'm trying: import httplib def getData(src): d = src.read(1024) while d and len(d) > 0: yield d d = src.read(1024) if __name__ == "__main__": con = httplib.HTTPSConnection('example.com', port='8443', cert_file='...', key_file='...') con
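The generator approach does work with httplib: HTTPResponse.read(amt) returns successive blocks until it returns an empty string. A minimal sketch, assuming a plain HTTP endpoint at example.com rather than the asker's HTTPS-with-client-certificate setup:

import httplib

def get_data(response, size=1024):
    # Yield successive blocks until read() returns an empty string,
    # which signals the end of the response body.
    while True:
        block = response.read(size)
        if not block:
            break
        yield block

conn = httplib.HTTPConnection("example.com")
conn.request("GET", "/stream")
resp = conn.getresponse()
for chunk in get_data(resp):
    print len(chunk)
conn.close()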

URL encode using Python 2.7

孤人 submitted on 2020-01-03 13:38:07
Question: >>> import httplib >>> x = httplib.HTTPConnection('localhost', 8080) >>> x.connect() >>> x.request('GET','/camera/store?fn=aaa&ts='+str.encode('2015-06-15T14:45:21.982600+00:00','ascii')+'&cam=ddd') >>> y=x.getresponse() >>> z=y.read() >>> z 'error: Invalid format: "2015-06-15T14:45:21.982600 00:00" is malformed at " 00:00"' The server shows me this error. I want to encode the timestamp so that it becomes: 2015-06-15T14%3A45%3A21.982600%2B00%3A00 Source: https://stackoverflow.com/questions/37541436/url
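The missing piece is percent-encoding the query string: str.encode(..., 'ascii') does not escape the '+' or ':' characters, so the '+' reaches the server as a space. urllib.urlencode (or urllib.quote) handles this. A minimal sketch using the values from the excerpt:

import httplib
import urllib

timestamp = "2015-06-15T14:45:21.982600+00:00"
# urlencode percent-escapes '+' and ':' so the timestamp arrives as
# 2015-06-15T14%3A45%3A21.982600%2B00%3A00.
query = urllib.urlencode({"fn": "aaa", "ts": timestamp, "cam": "ddd"})

conn = httplib.HTTPConnection("localhost", 8080)
conn.request("GET", "/camera/store?" + query)
resp = conn.getresponse()
print resp.status, resp.read()
conn.close()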

Python: httplib getresponse issues many recv() calls

前提是你 submitted on 2020-01-01 11:46:46
Question: getresponse issues many recv calls while reading the headers of an HTTP response. It actually issues a recv for each byte, which results in many system calls. How can this be optimized? I verified it on an Ubuntu machine with an strace dump. Sample code: conn = httplib.HTTPConnection("www.python.org") conn.request("HEAD", "/index.html") r1 = conn.getresponse() strace dump: sendto(3, "HEAD /index.html HTTP/1.1\r\nHost:"..., 78, 0, NULL, 0) = 78 recvfrom(3, "H", 1, 0, NULL, NULL) = 1 recvfrom(3, "T", 1, 0, NULL
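In Python 2.7, getresponse accepts an optional buffering argument; passing True makes httplib read from a buffered socket file, so the headers come in with large recv() calls instead of one byte at a time. A short sketch of that change applied to the sample code:

import httplib

conn = httplib.HTTPConnection("www.python.org")
conn.request("HEAD", "/index.html")
# buffering=True avoids the one-recv-per-byte pattern seen in the
# strace output by wrapping the socket in a buffered reader.
r1 = conn.getresponse(buffering=True)
print r1.status, r1.reason
conn.close()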

What is the difference between the urllib, urllib2, urllib3, and requests modules?

て烟熏妆下的殇ゞ submitted on 2019-12-25 18:35:53
In Python, what is the difference between the urllib, urllib2, urllib3, and requests modules? Why are there three? They seem to do the same thing... #1 I know it's been said already, but I'd highly recommend the requests Python package. If you've used languages other than Python, you probably think urllib and urllib2 are easy to use, short on code, and powerful, which is how I used to think. But the requests package is so useful and so concise that everyone should use it. First, it supports a fully RESTful API, and it's as easy as: import requests resp = requests.get('http://www.mywebsite.com/user') resp = requests.post('http://www.mywebsite.com/user') resp = requests.put('http://www.mywebsite.com/user/put') resp = requests.delete('http://www.mywebsite.com/user/delete') Whether it's GET or POST, you never have to encode the parameters again; it simply takes a dictionary as an argument. userdata = {"firstname":
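For example, a POST that would need manual encoding with urllib2 is just a dictionary with requests. The field values below are placeholders, since the excerpt cuts off mid-dictionary:

import requests

# requests encodes the dictionary as the request body automatically.
userdata = {"firstname": "John", "lastname": "Doe", "password": "jdoe123"}
resp = requests.post("http://www.mywebsite.com/user", data=userdata)
print resp.status_code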

I want to call the HDFS REST API to upload a file

守給你的承諾、 submitted on 2019-12-24 16:34:02
Question: I want to call the HDFS REST API to upload a file using httplib. My program creates the file, but there is no content in it. ===================================================== Here is my code: import httplib conn=httplib.HTTPConnection("localhost:50070") conn.request("PUT","/webhdfs/v1/levi/4?op=CREATE") res=conn.getresponse() print res.status,res.reason conn.close() conn=httplib.HTTPConnection("localhost:50075") conn.connect() conn.putrequest("PUT","/webhdfs/v1/levi/4?op=CREATE&user.name=levi")
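WebHDFS creates files in two steps: the first PUT to the NameNode returns a 307 redirect whose Location header names a DataNode, and the file bytes must be sent in a second PUT to that address. Skipping or mis-addressing the second step produces exactly this symptom, an empty file. A minimal sketch, assuming a local file name and reusing the user and path from the excerpt:

import httplib
import urlparse

# Step 1: ask the NameNode where to write. WebHDFS answers with a
# 307 redirect; the Location header points at the DataNode to use.
conn = httplib.HTTPConnection("localhost:50070")
conn.request("PUT", "/webhdfs/v1/levi/4?op=CREATE&user.name=levi")
res = conn.getresponse()
location = res.getheader("Location")
conn.close()

# Step 2: send the actual file bytes to the DataNode from step 1.
with open("local_file.txt", "rb") as f:
    data = f.read()

parsed = urlparse.urlparse(location)
conn = httplib.HTTPConnection(parsed.netloc)
conn.request("PUT", parsed.path + "?" + parsed.query, body=data)
res = conn.getresponse()
print res.status, res.reason   # 201 Created on success
conn.close()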

When I use httplib for my OAuth in Python, I always get “CannotSendRequest” and then "

。_饼干妹妹 submitted on 2019-12-24 08:19:39
Question: Traceback: File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response 92. response = callback(request, *callback_args, **callback_kwargs) File "/home/ea/ea/hell/life/views.py" in linkedin_auth 274. token = oauth_linkedin.get_unauthorised_request_token() File "/home/ea/ea/hell/life/oauth_linkedin.py" in get_unauthorised_request_token 52. resp = fetch_response(oauth_request, connection) File "/home/ea/ea/hell/life/oauth_linkedin.py" in fetch_response 42.
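httplib raises CannotSendRequest when a new request is issued on a connection whose previous response has not been fully read, or which hit an error and was never closed. The usual remedies are to consume each response before reusing the connection, or to open a fresh connection per request. A small sketch of the pattern, with illustrative paths rather than the asker's oauth_linkedin code:

import httplib

conn = httplib.HTTPSConnection("api.linkedin.com")

# The first response must be fully read (or the connection closed)
# before this connection object can send another request; otherwise
# httplib raises CannotSendRequest.
conn.request("GET", "/uas/oauth/requestToken")
resp = conn.getresponse()
body = resp.read()            # consume the response completely

# The connection is now idle again and can be reused safely.
conn.request("GET", "/uas/oauth/accessToken")
resp = conn.getresponse()
print resp.status
conn.close()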

Python: Problems getting past the login page of an .aspx site

旧城冷巷雨未停 submitted on 2019-12-23 18:24:30
Question: Problem: I have searched several websites/blogs/etc. for a solution but did not find what I was looking for. The problem, in short, is that I would like to scrape a site, but to get to that site I have to get past the login page. What I did: I did manage to use urllib2 and httplib to open the page, but even after logging in (with no errors displayed), the redirect that the login page performs in the browser does not happen. My code was not too different from what was shown here:
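ASP.NET login pages often fail silently like this because the hidden __VIEWSTATE and __EVENTVALIDATION fields are not posted back, or because cookies are not kept between requests. A rough sketch of the usual approach with urllib2 and cookielib; the URL and form field names are placeholders that must be read from the real login page:

import re
import urllib
import urllib2
import cookielib

# Keep cookies across requests so the ASP.NET session survives login.
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

login_url = "https://example.com/login.aspx"
page = opener.open(login_url).read()

# The hidden __VIEWSTATE / __EVENTVALIDATION values must be echoed
# back along with the credentials.
viewstate = re.search(r'id="__VIEWSTATE" value="([^"]*)"', page).group(1)
eventval = re.search(r'id="__EVENTVALIDATION" value="([^"]*)"', page).group(1)

form = urllib.urlencode({
    "__VIEWSTATE": viewstate,
    "__EVENTVALIDATION": eventval,
    "txtUsername": "me",      # actual field names depend on the page
    "txtPassword": "secret",
    "btnLogin": "Log In",
})
resp = opener.open(login_url, form)
print resp.geturl()           # should now be the post-login URL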

Python: httplib error: cannot send headers

前提是你 submitted on 2019-12-23 17:49:01
Question: conn = httplib.HTTPConnection('thesite') conn.request("GET","myurl") conn.putheader('Connection','Keep-Alive') #conn.putheader('User-Agent','Mozilla/5.0(Windows; u; windows NT 6.1;en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome//5.0.375.126 Safari//5.33.4') #conn.putheader('Accept-Encoding','gzip,deflate,sdch') #conn.putheader('Accept-Language','en-US,en;q=0.8') #conn.putheader('Accept-Charset','ISO-8859-1,utf-8;1=0.7,*;q=0.3') conn.endheaders() r1= conn.getresponse() It raises an error:
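request() sends the request line and headers immediately, so calling putheader() afterwards is what triggers the error. Headers either go into request() as a dictionary, or the request is built by hand with putrequest/putheader/endheaders. A short sketch of both orderings against the same placeholder host:

import httplib

conn = httplib.HTTPConnection("thesite")

# Option 1: pass the headers directly to request().
headers = {"Connection": "Keep-Alive", "Accept-Language": "en-US,en;q=0.8"}
conn.request("GET", "/myurl", headers=headers)
r1 = conn.getresponse()
print r1.status
r1.read()                      # finish this response before reusing conn

# Option 2: build the request by hand -- putrequest first, then the
# headers, then endheaders(). Never call putheader() after request(),
# because the headers have already been sent by then.
conn.putrequest("GET", "/myurl")
conn.putheader("Connection", "Keep-Alive")
conn.endheaders()
r2 = conn.getresponse()
print r2.status
conn.close()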