Concurrent downloads - Python


Speeding up crawling is basically Eventlet's main use case. It's extremely fast -- we have an application that has to hit 2,000,000 URLs in a few minutes. It makes use of the fastest event interface on your system (generally epoll), and uses greenthreads (which are built on top of coroutines and are very inexpensive) to make the crawler easy to write.

Here's an example from the docs:

urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
     "https://wiki.secondlife.com/w/images/secondlife.jpg",
     "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

import eventlet
from eventlet.green import urllib2  

def fetch(url):
  body = urllib2.urlopen(url).read()
  return url, body

pool = eventlet.GreenPool()
for url, body in pool.imap(fetch, urls):
  print "got body from", url, "of length", len(body)

This is a pretty good starting point for developing a more fully-featured crawler. Feel free to pop in to #eventlet on Freenode to ask for help.

[update: I added a more-complex recursive web crawler example to the docs. I swear it was in the works before this question was asked, but the question did finally inspire me to finish it. :)]

While threading is certainly a possibility, I would instead suggest asyncore -- there's an excellent example here which shows exactly how to fetch two URLs simultaneously (and it's easy to generalize to any list of URLs!).
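The linked example isn't reproduced here, but a minimal dispatcher along the following lines illustrates the idea; the URLs, port, and class name are my own placeholders, and note that asyncore has since been deprecated (and removed in Python 3.12):

import asyncore
import socket
from urllib.parse import urlparse

class HTTPFetcher(asyncore.dispatcher):
    """Fetch one URL; asyncore multiplexes many of these in one thread."""

    def __init__(self, url):
        asyncore.dispatcher.__init__(self)
        self.url = url
        parsed = urlparse(url)
        self.path = parsed.path or "/"
        # Simple HTTP/1.0 request so the server closes the connection when done
        self.buffer = ("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n"
                       % (self.path, parsed.hostname)).encode()
        self.data = b""
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((parsed.hostname, 80))

    def handle_connect(self):
        pass

    def writable(self):
        # Only ask for write events while the request is still unsent
        return len(self.buffer) > 0

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]

    def handle_read(self):
        self.data += self.recv(8192)

    def handle_close(self):
        self.close()
        print("got", len(self.data), "bytes from", self.url)

# Creating several dispatchers before loop() fetches them all concurrently
for url in ["http://example.com/", "http://example.org/"]:
    HTTPFetcher(url)
asyncore.loop()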

Here is an article on threading which uses URL fetching as an example.
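A common thread-pool pattern for this kind of URL fetching looks roughly like the following; the URLs and worker count here are illustrative, not taken from the article:

import threading
import queue
import urllib.request

urls = ["http://example.com/", "http://example.org/", "http://example.net/"]

work = queue.Queue()
for url in urls:
    work.put(url)

def worker():
    while True:
        try:
            url = work.get_nowait()  # grab the next URL, exit when none remain
        except queue.Empty:
            return
        body = urllib.request.urlopen(url).read()
        print("got body from", url, "of length", len(body))
        work.task_done()

# Start a fixed number of worker threads and wait for them to finish
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()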

Nowadays there are excellent Python libraries you might want to use - urllib3 and requests.
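For instance, requests pairs naturally with the standard library's ThreadPoolExecutor; here's a rough sketch (the URLs and worker count are illustrative):

from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

urls = ["http://example.com/", "http://example.org/", "http://example.net/"]

def fetch(url):
    resp = requests.get(url, timeout=10)
    return url, resp.content

with ThreadPoolExecutor(max_workers=4) as executor:
    # Submit all downloads, then handle each response as it completes
    futures = {executor.submit(fetch, url): url for url in urls}
    for future in as_completed(futures):
        url, body = future.result()
        print("got body from", url, "of length", len(body))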
