Multiprocessing useless with urllib2?

Submitted by 落花浮王杯 on 2019-11-27 14:09:44

Take a look at gevent and specifically at this example: concurrent_download.py. It will be reasonably faster than multiprocessing and multithreading, plus it can handle thousands of connections easily.
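For reference, a minimal sketch of that gevent pattern might look like the following (the URLs are just placeholders, and this is not the code from concurrent_download.py itself):

import gevent
from gevent import monkey
monkey.patch_all()  # make the blocking sockets used by urllib2 cooperative

import urllib2

def fetch(url):
    # While this greenlet waits on the network, the others get to run
    return urllib2.urlopen(url).read()

urls = ['http://www.google.com', 'http://www.python.org', 'http://www.yahoo.com']
jobs = [gevent.spawn(fetch, url) for url in urls]
gevent.joinall(jobs, timeout=10)
print [len(job.value) for job in jobs if job.value is not None]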

Ah, here comes yet another discussion about the GIL. Well, here's the thing. Fetching content with urllib2 is going to be mostly IO-bound. Native threading AND multiprocessing will both have the same performance when the task is IO-bound (threading only becomes a problem when it's CPU-bound). Yes, you can speed it up; I've done it myself using Python threads, with something like 10 downloader threads.

Basically you use a producer-consumer model with one thread (or process) producing urls to download, and N threads (or processes) consuming from that queue and making requests to the server.

Here's some pseudo-code:

# Queue.Queue is thread-safe, so it is safe to share between the threads
import urllib2
from Queue import Queue

class Downloader(object):
    def __init__(self):
        self.queue = Queue()

    def producer(self):
        # Only need one producer, although you could have multiple
        with open('urllist.txt', 'r') as fh:
            for line in fh:
                self.queue.put(line.strip())

    def consumer(self):
        # Fire up N of these babies for some speed
        while True:
            url = self.queue.get()
            dh = urllib2.urlopen(url)
            with open('/dev/null', 'w') as fh:  # gotta put it somewhere
                fh.write(dh.read())
            self.queue.task_done()
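To run it, you would start one producer and N consumer threads, roughly like this (a sketch that assumes the Downloader class above and an existing urllist.txt):

import threading

d = Downloader()
d.producer()  # fill the queue up front (or run this in its own thread)
for _ in range(10):
    t = threading.Thread(target=d.consumer)
    t.daemon = True  # the consumers loop forever, so don't block interpreter exit
    t.start()
d.queue.join()  # returns once every queued URL has been fetched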

Now if you're downloading very large chunks of data (hundreds of MB) and a single request completely saturates the bandwidth, then yes running multiple downloads is pointless. The reason you run multiple downloads (generally) is because requests are small and have a relatively high latency / overhead.

It depends! Are you contacting different servers? Are the transferred files small or big? Do you lose most of the time waiting for the server to reply, or by transferring data?

Generally, multiprocessing involves some overhead and as such you want to be sure that the speedup gained by parallelizing the work is larger than the overhead itself.

Another point: network-bound and thus I/O-bound applications work – and scale – better with asynchronous I/O and an event-driven architecture instead of threading or multiprocessing, since in such applications much of the time is spent waiting on I/O and not doing any computation.

For your specific problem, I would try to implement a solution by using Twisted, gevent, Tornado or any other networking framework which does not use threads to parallelize connections.
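As an illustration, a minimal Twisted sketch could look like the following (the URLs are placeholders; getPage is the old, simple Twisted HTTP client API):

from twisted.internet import reactor, defer
from twisted.web.client import getPage

def print_sizes(results):
    # DeferredList delivers (success, value) pairs, one per request
    for success, value in results:
        if success:
            print len(value)
    reactor.stop()

urls = ['http://www.google.com', 'http://www.python.org']
deferreds = [getPage(url) for url in urls]
defer.DeferredList(deferreds).addCallback(print_sizes)
reactor.run()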

What you do when you split web requests over several processes is to parallelize the network latencies (i.e. the waiting for responses). So you should normally get a good speedup, since most of the processes should sleep most of the time, waiting for an event.

Or use Twisted. ;)

jfs

Nothing is useful if your code is broken: f() (with parentheses) calls the function immediately in Python; you should pass just f (no parentheses) so that it can be executed in the pool instead. Your code from the question:

#XXX BROKEN, DO NOT USE
result = [pool.apply_async(getTweets(i,)) for i in urls]
[i.get() for i in result]

Notice the parentheses after getTweets: they mean that all the code is executed serially in the main thread.

Delegate the call to the pool instead:

all_tweets = pool.map(getTweets, urls)

Also, you don't need separate processes here unless json.loads() is expensive (CPU-wise) in your case. You could use threads: replace multiprocessing.Pool with multiprocessing.pool.ThreadPool -- the rest is identical. The GIL is released during I/O in CPython, so threads should speed up your code if most of the time is spent in urlopen().read().

Here's a complete code example.
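Something along those lines (a sketch only, not the example that was originally linked; getTweets here is a stand-in that fetches a URL and parses the JSON body, and the URLs are made up):

import json
import urllib2
from multiprocessing.pool import ThreadPool

def getTweets(url):
    # I/O-bound: CPython releases the GIL while urlopen().read() waits on the network
    return json.loads(urllib2.urlopen(url).read())

urls = ['http://example.com/tweets?page=%d' % i for i in range(20)]
pool = ThreadPool(10)  # 10 worker threads
all_tweets = pool.map(getTweets, urls)
pool.close()
pool.join()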
