pycurl

Multi-request pycurl running forever (infinite loop)

别来无恙 submitted on 2019-12-05 04:39:28
Question: I want to perform a multi-request using PycURL. The code is:

    m.add_handle(handle)
    requests.append((handle, response))

    # Perform multi-request.
    SELECT_TIMEOUT = 1.0
    num_handles = len(requests)
    while num_handles:
        ret = m.select(SELECT_TIMEOUT)
        if ret == -1:
            continue
        while 1:
            ret, num_handles = m.perform()
            print "In while loop of multicurl"
            if ret != pycurl.E_CALL_MULTI_PERFORM:
                break

The thing is, this loop takes forever to run; it never terminates. Can anyone tell me what it does, and what are the …
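For contrast, the documented shape of this loop (as in pycurl's retrieve-multi example) drives perform() before waiting in select(): E_CALL_MULTI_PERFORM just means "call perform() again", and selecting before any perform() can spin forever, because select() returns -1 when no sockets are registered yet and the `continue` skips perform() entirely. A sketch, assuming pycurl is installed and the URLs are reachable:

```python
# Sketch of the CurlMulti loop with perform() driven first. Assumes pycurl
# is installed; `urls` is whatever list of URLs the caller supplies.
import pycurl
from io import BytesIO

def fetch_all(urls, timeout=1.0):
    m = pycurl.CurlMulti()
    handles = []
    for url in urls:
        c = pycurl.Curl()
        buf = BytesIO()
        c.setopt(pycurl.URL, url)
        c.setopt(pycurl.WRITEFUNCTION, buf.write)
        m.add_handle(c)
        handles.append((c, buf))
    num_handles = len(handles)
    while num_handles:
        # Drive the transfers first; E_CALL_MULTI_PERFORM = "call me again".
        while True:
            ret, num_handles = m.perform()
            if ret != pycurl.E_CALL_MULTI_PERFORM:
                break
        if num_handles:
            m.select(timeout)  # now there are sockets worth waiting on
    results = []
    for c, buf in handles:
        m.remove_handle(c)
        c.close()
        results.append(buf.getvalue())
    m.close()
    return results
```

num_handles counts transfers still in progress, so the outer loop exits once every handle has finished; if it never terminates, either the transfers genuinely never complete or, as above, perform() is never reached.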

Problem trying to install PyCurl on Mac Snow Leopard

巧了我就是萌 submitted on 2019-12-04 22:54:00
Question: My app needs to use PyCurl, so I tried to install it on my Mac, but I ran into a lot of problems and errors. Requirement: first of all, I have to say that the version of Python on my Mac runs as 32-bit, because I need to use WxPython, which requires 32-bit Python. For that I used:

    defaults write com.apple.versioner.python Prefer-32-Bit -bool yes

To install PyCurl I used:

    sudo env ARCHFLAGS="-arch x86_64" easy_install setuptools pycurl

And the terminal returned:

    Best match: setuptools 0…

How to grab streaming data from Twitter with pycurl and parse it using NLTK / regular expressions

给你一囗甜甜゛ submitted on 2019-12-04 18:05:06
I am a newbie in Python, and my boss gave me this task:

1. Grab streaming data from Twitter, connecting with pycurl, and output it as JSON
2. Parse it using NLTK and regular expressions
3. Save it to a database file (MySQL) or a flat file (txt)

Note: this is the URL I want to grab: 'http://search.twitter.com/search.json?geocode=-0.789275%2C113.921327%2C1.0km&q=+near%3Aindonesia+within%3A1km&result_type=recent&rpp=10'

Does anyone know how to grab streaming data from Twitter using the steps above? Your help would be greatly appreciated :)

I would look at pattern: it's a very nice web-mining library, and …
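Steps 2 and 3 can be sketched with the standard library alone. The old search.twitter.com JSON API is long retired, so the payload below is a hypothetical sample; in the original setup pycurl would download this JSON from the query URL in the question:

```python
# Parse a (hypothetical) Search API response and save it to a flat file.
import json
import re

SAMPLE_JSON = '{"results": [{"from_user": "alice", "text": "Flooding #jakarta near the station"}]}'

def extract_tweets(raw_json):
    """Step 2: parse the JSON body and pull out user, text, and #hashtags."""
    data = json.loads(raw_json)
    rows = []
    for tweet in data.get("results", []):
        text = tweet.get("text", "")
        tags = re.findall(r"#(\w+)", text)  # the regular-expression part
        rows.append((tweet.get("from_user", ""), text, tags))
    return rows

def save_rows(rows, path):
    """Step 3 (file-based variant): one tab-separated line per tweet."""
    with open(path, "w", encoding="utf-8") as f:
        for user, text, tags in rows:
            f.write("%s\t%s\t%s\n" % (user, text, ",".join(tags)))

rows = extract_tweets(SAMPLE_JSON)
```

The same rows could instead go into MySQL with an INSERT per tuple; NLTK would slot in wherever tokenisation beyond plain regexes is needed.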

How to install libcurl with nss backend in aws ec2? (Python 3.6 64bit Amazon Linux)

巧了我就是萌 submitted on 2019-12-04 13:11:39
I have an EC2 instance in AWS running Python 3.6 (Amazon Linux/2.8.3) where I need to install pycurl with the NSS SSL backend. First I tried adding

    pycurl==7.43.0 --global-option="--with-nss"

to my requirements.txt file, but I was getting installation errors. So I ended up doing it by adding a .config file in .ebextensions (that runs during deployment):

    container_commands:
      09_pycurl_reinstall:
        # the upgrade option is because it will run after pip installs the requirements.txt file,
        # and it needs to be done with the virtualenv activated
        command: 'source /opt/python/run/venv/bin…

Get many pages with pycurl?

℡╲_俬逩灬. submitted on 2019-12-04 12:23:12
I want to get many pages from a website, like

    curl "http://farmsubsidy.org/DE/browse?page=[0000-3603]" -o "de.#1"

but get the pages' data in Python, not in files on disk. Can someone please post pycurl code to do this, or fast urllib2 (not one-at-a-time) if that's possible, or else say "forget it, curl is faster and more robust"? Thanks.

Here is a solution based on urllib2 and threads:

    import urllib2
    from threading import Thread

    BASE_URL = 'http://farmsubsidy.org/DE/browse?page='
    NUM_RANGE = range(0000, 3603)
    THREADS = 2

    def main():
        for nums in split_seq(NUM_RANGE, THREADS):
            t = Spider(BASE_URL, …
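The answer's snippet is truncated, so here is a reconstructed, runnable Python 3 sketch of the same idea. The Spider class and split_seq helper are guesses at what the original contained; the fetch function is injectable so nothing below actually has to hit farmsubsidy.org:

```python
# Threaded page fetcher: split the page numbers across THREADS workers,
# each worker downloads its chunk and keeps the bodies in memory.
from threading import Thread
from urllib.request import urlopen

def split_seq(seq, chunks):
    """Split a sequence into roughly equal contiguous chunks, one per thread."""
    seq = list(seq)
    size = len(seq) // chunks + (len(seq) % chunks > 0)
    return [seq[i:i + size] for i in range(0, len(seq), size)]

class Spider(Thread):
    def __init__(self, base_url, nums, fetch=None):
        Thread.__init__(self)
        self.base_url = base_url
        self.nums = nums
        # Default fetcher downloads for real; callers/tests can inject a fake.
        self.fetch = fetch or (lambda url: urlopen(url).read())
        self.results = {}

    def run(self):
        for num in self.nums:
            url = "%s%04d" % (self.base_url, num)  # zero-padded like de.0000
            self.results[num] = self.fetch(url)

def crawl(base_url, nums, threads=2, fetch=None):
    spiders = [Spider(base_url, chunk, fetch) for chunk in split_seq(nums, threads)]
    for s in spiders:
        s.start()
    for s in spiders:
        s.join()
    merged = {}
    for s in spiders:
        merged.update(s.results)
    return merged
```

With two threads this roughly halves wall-clock time on an I/O-bound crawl; pycurl's CurlMulti would avoid threads entirely, at the cost of the more involved perform/select loop shown in the first question above.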

Error installing PyCurl

孤街浪徒 submitted on 2019-12-04 12:15:54
Question: I tried installing pycurl via pip. It didn't work; instead it gave me this error:

    running install
    running build
    running build_py
    running build_ext
    building 'pycurl' extension
    gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch ppc -arch x86_64 -pipe -DHAVE_CURL_SSL=1 -I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c src/pycurl.c -o build/temp.macosx-10.6-universal-2.6/src/pycurl…

Which is best in Python: urllib2, PycURL or mechanize?

萝らか妹 submitted on 2019-12-04 07:38:44
Question: OK, so I need to download some web pages using Python, and I did a quick investigation of my options.

Included with Python:
urllib - it seems to me that I should use urllib2 instead; urllib has no cookie support and handles HTTP/FTP/local files only (no SSL)
urllib2 - a complete HTTP/FTP client; supports most needed things like cookies, but does not support all HTTP verbs (only GET and POST; no TRACE, etc.)

Full featured:
mechanize - can use/save Firefox/IE cookies, take actions like "follow second link", actively…
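Since much of this comparison turns on cookie support, it is worth noting that the urllib2 + cookielib pairing (urllib.request + http.cookiejar in Python 3) does handle cookies without any third-party package. A minimal sketch:

```python
# An opener that remembers cookies across requests, stdlib only.
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar)
)
# opener.open(url) would now store any Set-Cookie headers in `jar`
# and send them back on subsequent requests to the same site.
```

mechanize layers browser-like conveniences (form filling, link following) on top of this; PycURL trades that convenience for libcurl's speed and protocol coverage.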

pycurl equivalent of “curl --data-binary”

て烟熏妆下的殇ゞ submitted on 2019-12-04 05:09:48
I'd like to know the equivalent of this curl command in pycurl:

    curl --data-binary @binary_data_file.bin 'http://server/myapp/method'

Note: the curl statement above uses the POST method. I need this for compatibility with my server script.

Jon Clements: The requests library is meant to keep things like this simple:

    import requests
    r = requests.post('http://server/myapp/method', data={'aaa': 'bbb'})

Or, depending on how the receiving end expects the data:

    import requests
    r = requests.post('http://server/myapp/method', data=file('binary_data_file.bin', 'rb').read())

From libcurl, setopt(...) try…
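If neither pycurl nor requests is a hard requirement, the stdlib can express the same thing: --data-binary is simply a POST whose body is the file's raw bytes, unmodified. A sketch that only builds the request (the server URL is the hypothetical one from the question):

```python
# Build a POST request carrying a file's raw bytes, like curl --data-binary.
import urllib.request

def binary_post_request(url, path):
    with open(path, "rb") as f:
        payload = f.read()
    return urllib.request.Request(
        url,
        data=payload,  # raw bytes, exactly what --data-binary sends
        headers={"Content-Type": "application/octet-stream"},
    )
```

urllib.request.urlopen(req) would then perform the POST; in pycurl the closest equivalent should be setting the same bytes via c.setopt(pycurl.POSTFIELDS, payload) on a handle whose URL is set, though check pycurl's docs for the exact option on your version.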

Create missing directories in ftplib storbinary

北战南征 submitted on 2019-12-04 01:50:42
I was using pycurl to transfer files over FTP in Python. I could create the missing directories on my remote server automatically using:

    c.setopt(pycurl.FTP_CREATE_MISSING_DIRS, 1)

For various reasons I have to switch to ftplib, but I don't know how to do the same there. Is there any option I can pass to the storbinary function to do that, or do I have to create the directories manually?

FTP_CREATE_MISSING_DIRS is a curl operation (added here). I'd hazard a guess that you have to do it manually with ftplib, but I'd love to be proven wrong - anyone? I'd do something like the following: (untested, and need…
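A sketch of the manual approach the answer hints at: walk the remote path one component at a time, cwd() into each part, and mkd() whatever is missing. The helper below is a reconstruction, not the truncated answer's actual code; `ftp` only needs cwd()/mkd(), so a real ftplib.FTP instance works, as would any stand-in:

```python
# Emulate FTP_CREATE_MISSING_DIRS on top of ftplib.
import ftplib

def ftp_makedirs(ftp, path):
    """Create each missing component of `path` on the server and leave the
    connection cwd'd into `path` (like curl's FTP_CREATE_MISSING_DIRS)."""
    for part in path.strip("/").split("/"):
        try:
            ftp.cwd(part)
        except ftplib.error_perm:  # 550: directory doesn't exist yet
            ftp.mkd(part)
            ftp.cwd(part)

# Then upload as usual, e.g.:
#   ftp_makedirs(ftp, "a/b/c")
#   ftp.storbinary("STOR file.bin", open("file.bin", "rb"))
```

Servers differ in which error code a cwd() into a missing directory raises, so a production version may need to catch error_temp as well.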

Handshake failure in Python (_ssl.c:590)

久未见 submitted on 2019-12-03 20:41:31
Question: When I execute the lines below:

    req = urllib2.Request(requestwithtoken)
    self.response = urllib2.urlopen(req, self.request).read()

I get the following exception:

    SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:590)

The thing is, I am able to get the token by pinging the service with curl, and during the process of retrieving the token all the certificates were verified. But using the generated token, I am not able to connect to the service. I…
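A handshake alert like this usually means the client offered a protocol or cipher the server rejects (curl succeeding while Python fails often comes down to the two linking different SSL libraries). A sketch of pinning the handshake to TLS 1.2+ with the Python 3.7+ stdlib, where `url` stands in for the service endpoint:

```python
# Build an opener whose HTTPS handshakes refuse anything below TLS 1.2.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3 / TLS 1.0 / 1.1
opener = urllib.request.build_opener(
    urllib.request.HTTPSHandler(context=ctx)
)
# opener.open(url, data=payload) would then retry the request over TLS 1.2+.
```

On the legacy Python 2 / urllib2 stack in the question, the analogous move was passing an ssl.SSLContext built with an explicit TLS protocol to urlopen, assuming a Python new enough to accept a context argument.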