Like `urllib2`, `requests` is blocking.
But I wouldn't suggest using another library, either.
The simplest answer is to run each request in a separate thread. Unless you have hundreds of them, this should be fine. (How many hundreds is too many depends on your platform. On Windows, the limit is probably how much memory you have for thread stacks; on most other platforms the cutoff comes earlier.)
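A minimal sketch of the thread-per-request approach, assuming you just have a list of URLs to fetch; the `urls` list and the `fetch` helper are placeholders, not anything from the question:

    import threading
    import requests

    urls = ['http://example.com/{}'.format(i) for i in range(20)]
    results = {}

    def fetch(url):
        # Each thread blocks on its own request, without blocking the others.
        results[url] = requests.get(url).text

    threads = [threading.Thread(target=fetch, args=(url,)) for url in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()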
If you do have hundreds, you can put them in a thread pool. The ThreadPoolExecutor example in the `concurrent.futures` page is almost exactly what you need; just change the `urllib` calls to `requests` calls. (If you're on 2.x, use `futures`, the backport of the same package on PyPI.) The downside is that you don't actually kick off all 1000 requests at once, just the first, say, 8. A sketch is below.
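Roughly what that looks like, modeled on the docs example but with the `urllib` call swapped for `requests` (the URL list and the 8-worker pool size are just illustrative):

    import concurrent.futures
    import requests

    urls = ['http://example.com/{}'.format(i) for i in range(1000)]

    def fetch(url):
        return requests.get(url, timeout=60).text

    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
        future_to_url = {executor.submit(fetch, url): url for url in urls}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
            except Exception as exc:
                print('%r generated an exception: %s' % (url, exc))
            else:
                print('%r page is %d bytes' % (url, len(data)))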
If you have hundreds, and they all need to be in parallel, this sounds like a job for `gevent`. Have it monkeypatch everything, then write the exact same code you'd write with threads, but spawning `greenlet`s instead of `Thread`s.
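A sketch of that, assuming the same fetch-a-list-of-URLs task as above; note that `monkey.patch_all()` has to run before `requests` gets imported so the patched sockets are the ones it uses:

    from gevent import monkey
    monkey.patch_all()

    import gevent
    import requests

    urls = ['http://example.com/{}'.format(i) for i in range(500)]

    def fetch(url):
        # This looks like blocking code, but the patched sockets yield to the
        # gevent hub, so all the greenlets make progress concurrently.
        return requests.get(url).text

    greenlets = [gevent.spawn(fetch, url) for url in urls]
    gevent.joinall(greenlets)
    pages = [g.value for g in greenlets]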
`grequests`, which evolved out of the old async support directly in `requests`, effectively does the `gevent` + `requests` wrapping for you. And for the simplest cases, it's great. But for anything non-trivial, I find it easier to read explicit `gevent` code. Your mileage may vary.
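For comparison, the same task with `grequests` (again with a placeholder URL list); failed requests come back as `None` unless you pass an exception handler:

    import grequests

    urls = ['http://example.com/{}'.format(i) for i in range(500)]

    reqs = (grequests.get(url) for url in urls)
    responses = grequests.map(reqs)
    pages = [r.text for r in responses if r is not None]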
Of course if you need to do something really fancy, you probably want to go to `twisted`, `tornado`, or `tulip` (or wait a few months for `tulip` to be part of the stdlib).