How does the Requests library compare with PyCurl performance-wise?
My understanding is that Requests is a Python wrapper around urllib3, whereas PyCurl is a Python binding to libcurl, which is native C.
Focusing on size:
On my MacBook Air with 8 GB of RAM and a 512 GB SSD, downloading a 100 MB file coming in at about 3 kilobytes a second (over the internet and wifi), pycurl, curl, and the requests library's get function (regardless of chunking or streaming) perform pretty much the same.
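For reference, here is a minimal sketch of the pycurl side of that comparison: a straight download to a file using libcurl's write target. The URL and output filename are placeholders, not the ones from my actual test.

    import pycurl

    url = "http://localhost/bigfile.bin"   # placeholder URL, not my real test server
    out_path = "out.bin"

    with open(out_path, "wb") as f:
        c = pycurl.Curl()
        c.setopt(c.URL, url)        # target to download
        c.setopt(c.WRITEDATA, f)    # libcurl writes the response body straight to the file
        c.perform()                 # blocking transfer
        c.close()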
On a smaller quad-core Intel Linux box with 4 GB of RAM, downloading a 1 GB file over localhost (from Apache on the same box), curl and pycurl are about 2.5x faster than the requests library. For requests, chunking and streaming together give roughly a 10% boost (with chunk sizes above 50,000 bytes).
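The requests variant with streaming and chunking looked roughly like the sketch below. The URL, output filename, and the exact chunk size of 65536 are illustrative; the point is stream=True combined with iter_content and a chunk size above 50,000.

    import requests

    url = "http://localhost/bigfile.bin"   # placeholder URL, not my real test server
    out_path = "out.bin"

    # stream=True avoids loading the whole body into memory at once
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(out_path, "wb") as f:
            # chunk_size above 50,000 bytes is where I saw the ~10% improvement
            for chunk in r.iter_content(chunk_size=65536):
                f.write(chunk)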
I thought I was going to have to swap requests out for pycurl, but that turned out to be unnecessary, because the application I'm building won't have the client and server that close together.