urllib2

Multiple urllib2 connections

对着背影说爱祢 submitted on 2019-12-12 19:13:19

Question: I want to download multiple images at the same time. For that I'm using threads, each one downloading an image with the urllib2 module. My problem is that even though the threads start (almost) simultaneously, the images are downloaded one by one, as in a single-threaded environment. Here is the threaded function:

```python
def updateIcon(self, iter, imageurl):
    req = urllib2.Request('http://site.com/' + imageurl)
    response = urllib2.urlopen(req)
    imgdata = response.read()
    gobject.idle_add(self.setIcon, iter, …
```
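For reference, downloads do overlap across threads, since blocking socket I/O releases the GIL; a minimal, hedged sketch of the pattern in modern Python (where urllib2's role is filled by urllib.request — here, fetch and on_done are placeholder hooks standing in for urlopen(...).read() and gobject.idle_add):

```python
import threading

def download_all(urls, fetch, on_done):
    # One thread per URL; fetch(url) performs the blocking download and
    # on_done(url, data) receives the result (e.g. hand-off to the GUI).
    # Blocking network I/O releases the GIL, so the downloads overlap.
    threads = [threading.Thread(target=lambda u=u: on_done(u, fetch(u)))
               for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

With urllib.request, fetch could be `lambda u: urllib.request.urlopen(u).read()`.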

POST request via urllib/urllib2?

為{幸葍}努か submitted on 2019-12-12 19:11:39

Question: Before you say anything, I've looked around SO and the solutions didn't work. I need to make a POST request to a login script in Python. The URL looks like http://example.com/index.php?act=login, and it accepts a username and password via POST. Could anyone help me with this? I've tried:

```python
import urllib, urllib2, cookielib

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.addheaders.append(('User-agent', 'Mozilla/4.0'))
opener.addheaders.append((…
```
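A hedged sketch of the POST itself, written for modern Python (urllib.request replaces urllib2, http.cookiejar replaces cookielib); the field names username and password are assumptions, since the actual form fields aren't shown:

```python
import http.cookiejar
import urllib.parse
import urllib.request

def make_login_request(url, username, password):
    # Passing data= turns the request into a POST; the body must be bytes.
    data = urllib.parse.urlencode({"username": username,
                                   "password": password}).encode("ascii")
    req = urllib.request.Request(url, data=data)
    req.add_header("User-Agent", "Mozilla/4.0")
    return req

def make_cookie_opener():
    # Equivalent of urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)):
    # the opener keeps whatever session cookie the login script sets.
    jar = http.cookiejar.CookieJar()
    return urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
```

Calling `make_cookie_opener().open(make_login_request(...))` would send the request and retain the session cookie for later requests through the same opener.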

Why does text retrieved from pages sometimes look like gibberish?

你。 submitted on 2019-12-12 17:42:05

Question: I'm using urllib and urllib2 in Python to open and read webpages, but sometimes the text I get is unreadable. For example, if I run this:

```python
import urllib
text = urllib.urlopen('http://tagger.steve.museum/steve/object/141913').read()
print text
```

I get some unreadable text. I've read these posts: "Gibberish from urlopen" and "Does python urllib2 automatically uncompress gzip data fetched from webpage?" but can't seem to find my answer. Thank you in advance for your help! UPDATE: I fixed the problem by …
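One common cause of such gibberish is a gzip-compressed response body, which urllib and urllib2 do not decompress for you; a small sketch of the usual check (a gzip stream is identified by the magic bytes 0x1f 0x8b):

```python
import gzip
import io

def maybe_decompress(raw):
    # A gzip stream starts with the magic bytes 0x1f 0x8b; if present,
    # decompress the body, otherwise return it unchanged.
    if raw[:2] == b"\x1f\x8b":
        return gzip.GzipFile(fileobj=io.BytesIO(raw)).read()
    return raw
```

The more robust fix is to inspect the Content-Encoding response header rather than sniffing bytes, or to avoid sending an Accept-Encoding header that advertises gzip support.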

Trying to access the Internet using urllib2 in Python

时光总嘲笑我的痴心妄想 submitted on 2019-12-12 17:23:39

Question: I'm trying to write a program that will (among other things) get text or source code from a predetermined website. I'm learning Python to do this, and most sources have told me to use urllib2. Just as a test, I tried this code:

```python
import urllib2
response = urllib2.urlopen('http://www.python.org')
html = response.read()
```

Instead of acting in any expected way, the shell just sits there, as if it's waiting for input. There isn't even a ">>>" or "...". The only way to exit this state is …
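When urlopen appears to hang like this, a timeout at least keeps it from blocking indefinitely; a hedged sketch in modern Python (urllib.request took over urllib2's API — the opener parameter is only there so the function can be exercised without a network connection):

```python
import urllib.request

def fetch(url, timeout=10, opener=None):
    # timeout= makes the call raise an error instead of sitting forever;
    # opener defaults to a plain urllib.request opener.
    if opener is None:
        opener = urllib.request.build_opener()
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with opener.open(req, timeout=timeout) as response:
        return response.read()
```

If the call times out consistently, the problem is usually the network path (firewall, proxy, DNS) rather than the code.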

Twitter stream using OAuth in Python behaving differently on two equally configured machines

不想你离开。 submitted on 2019-12-12 11:47:36

Question: I have the same piece of code dealing with the Twitter User Stream running on two different machines. Both machines are Ubuntu Lucid with Python 2.6.5, but on my home machine I receive "HTTP Error 401: Unauthorized", while at the university it works perfectly. On both machines it works when I use curl with the same parameters, i.e. consumer key, consumer secret, access token, and access key. See the code below; it was created by Josh Sharp:

```python
from oauth.oauth import OAuthRequest, …
```

python urllib2 document.login

久未见 submitted on 2019-12-12 10:26:57

Question: How would you go about logging into a website that is set up like the one below, using python urllib2? The following is the JavaScript handler on the form, invoked via onsubmit. How would I process this in Python?

```html
<script>
function handleLogin() {
  document.login.un.value = document.login.username.value;
  document.login.width.value = screen.width;
  document.login.height.value = screen.height;
}
</script>
```

Below is the HTML form with all the components to send as POST. What holds me up is the onsubmit function.

```html
<form id=…
```
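The onsubmit handler only shuffles values around client-side, so it can be replicated by posting the final field set directly; a hedged sketch (field names beyond un, width, and height are guesses, since the form itself is cut off):

```python
import urllib.parse

def build_login_data(username, password, width=1920, height=1080):
    # Reproduce what handleLogin() leaves in the form at submit time:
    # 'un' is a copy of the username; width/height mimic screen.width/height.
    fields = {
        "username": username,
        "un": username,
        "password": password,   # assumed field name; the form is truncated
        "width": str(width),
        "height": str(height),
    }
    return urllib.parse.urlencode(fields).encode("ascii")
```

The resulting bytes would go in the data= argument of a Request, exactly as in an ordinary form POST; the browser's JavaScript never needs to run.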

How to send utf-8 content in a urllib2 request?

空扰寡人 submitted on 2019-12-12 09:24:16

Question: I've been struggling with the following question for the past half a day, and although I've found some info about similar problems, nothing really hits the spot. I'm trying to send a PUT request using urllib2 with data that contains some Unicode characters:

```python
body = u'{ "bbb" : "asdf\xd7\xa9\xd7\x93\xd7\x92"}'
conn = urllib2.Request(request_url, body, headers)
conn.get_method = lambda: 'PUT'
response = urllib2.urlopen(conn)
```

I've tried to use body = body.encode('utf-8') and other variations, but …
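The snippet above mixes a unicode literal with UTF-8 byte escapes, which is the usual source of this kind of trouble; a hedged modern-Python sketch where the body is built with json.dumps and encoded exactly once (Request grew a method= argument, replacing the get_method lambda trick):

```python
import json
import urllib.request

def make_put_request(url, payload):
    # Build the JSON from real text, encode the whole body to UTF-8
    # exactly once, and declare the charset in the Content-Type header.
    body = json.dumps(payload, ensure_ascii=False).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "application/json; charset=utf-8")
    return req
```

In Python 2, the equivalent discipline is: keep the body as a clean unicode string, call .encode('utf-8') once just before passing it to Request, and never embed raw byte escapes in a u'' literal.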

python urllib2 download size

南笙酒味 submitted on 2019-12-12 09:13:26

Question: I want to download a file with urllib2, and meanwhile I want to display a progress bar. But how can I get the actual downloaded file size? My current code is

```python
ul = urllib2.urlopen('www.file.com/blafoo.iso')
data = ul.get_data()
# or
open('file.iso', 'w').write(ul.read())
```

The data is only written to the file once the whole download has been received from the website. How can I access the downloaded data size? Thanks for your help.

Answer 1: Here's an example of a text progress bar using the awesome …

Logging into quora using python

痞子三分冷 submitted on 2019-12-12 09:09:48

Question: I tried logging into Quora using Python, but it gives me the following error:

urllib2.HTTPError: HTTP Error 500: Internal Server Error

This is my code so far. I also work behind a proxy.

```python
import urllib2
import urllib
import re
import cookielib

class Quora:
    def __init__(self):
        '''Initialising and authentication'''
        auth = 'http://name:password@proxy:port'
        cj = cookielib.CookieJar()
        logindata = urllib.urlencode({'email' : 'email', 'password' : 'password'})
        handler = urllib2.ProxyHandler({'http…
```
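For reference, the proxy and cookie handlers compose into a single opener; a hedged modern-Python sketch of that wiring (the proxy URL with embedded credentials is a placeholder, and a 500 from the server may have nothing to do with the client side at all):

```python
import http.cookiejar
import urllib.request

def make_proxy_opener(proxy_url):
    # Chain ProxyHandler and HTTPCookieProcessor into one opener, as with
    # urllib2.build_opener(handler, cookie_processor) in the question.
    jar = http.cookiejar.CookieJar()
    return urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url}),
        urllib.request.HTTPCookieProcessor(jar),
    )
```

All requests made through the returned opener then traverse the proxy and share one cookie jar, which is what a login flow needs.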

Tor doesn't work with urllib2

核能气质少年 submitted on 2019-12-12 08:47:09

Question: I am trying to use Tor for anonymous access, with Privoxy as a proxy, via urllib2. System info: Ubuntu 14.04, recently upgraded from 13.10 via dist-upgrade. This is the code I am using for test purposes:

```python
import urllib2

def req(url):
    proxy_support = urllib2.ProxyHandler({"http": "127.0.0.1:8118"})
    opener = urllib2.build_opener(proxy_support)
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]
    return opener.open(url).read()

print req('https://check.torproject.org')
```

The above …
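One thing worth checking in that snippet: the ProxyHandler only maps the "http" scheme, while the test URL is https, so the https request may bypass Privoxy entirely; a hedged sketch registering both schemes (shown with modern urllib.request; port 8118 is Privoxy's default):

```python
import urllib.request

def make_tor_opener(privoxy="127.0.0.1:8118"):
    # Register the proxy for BOTH schemes; a handler covering only "http"
    # lets https:// URLs go out directly, outside Tor.
    proxy = urllib.request.ProxyHandler({
        "http": privoxy,
        "https": privoxy,
    })
    opener = urllib.request.build_opener(proxy)
    opener.addheaders = [("User-Agent", "Mozilla/5.0")]
    return opener
```

With both schemes mapped, fetching https://check.torproject.org through the opener should report Tor usage, assuming Privoxy is chained to a running Tor instance.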