Here is my code:
import csv
import requests

# `url` and `payload` are the login endpoint and credentials, defined elsewhere.
with requests.Session() as s:
    s.post(url, data=payload)
    download = s.get('url that directly downloads a csv report')
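A straightforward way to complete the snippet above is to pull the whole response into memory and feed its lines to csv.reader. A minimal sketch, with a hypothetical CSV_URL standing in for the real report URL:

import csv
import requests

CSV_URL = 'https://example.com/report.csv'  # hypothetical placeholder URL

with requests.Session() as s:
    download = s.get(CSV_URL)
    # The entire file is held in memory here; fine for small reports.
    content = download.content.decode('utf-8')
    reader = csv.reader(content.splitlines(), delimiter=',', quotechar='"')
    for row in reader:
        print(row)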
To simplify the existing answers, and to increase performance when downloading a large file, the code below works a bit more efficiently.
import csv
from contextlib import closing

import requests

url = "http://download-and-process-csv-efficiently/python.csv"

with closing(requests.get(url, stream=True)) as r:
    # iter_lines() yields raw bytes; decode each line so csv.reader gets text.
    lines = (line.decode('utf-8') for line in r.iter_lines())
    reader = csv.reader(lines, delimiter=',', quotechar='"')
    for row in reader:
        print(row)
By setting stream=True in the GET request, the response body is not downloaded up front. The decoded lines from r.iter_lines() form a generator, so csv.reader() lazily iterates over each line of the response as we loop with for row in reader.
This avoids loading the entire file into memory before we start processing it, drastically reducing memory overhead for large files.
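Because rows arrive one at a time, you can also aggregate on the fly without ever materializing the file. A minimal sketch, reusing the same placeholder URL and assuming a CSV whose second column is numeric:

import csv
from contextlib import closing

import requests

url = "http://download-and-process-csv-efficiently/python.csv"  # placeholder from above

total = 0.0
with closing(requests.get(url, stream=True)) as r:
    lines = (line.decode('utf-8') for line in r.iter_lines())
    reader = csv.reader(lines, delimiter=',', quotechar='"')
    next(reader, None)              # skip a header row, if the file has one
    for row in reader:
        total += float(row[1])      # assumes the second column is numeric
print(total)

Memory use stays roughly constant regardless of file size, since only the current line and the running total are ever held at once.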