Question
Python beginner here. I want to be able to time out my download of a video file if the process takes longer than 500 seconds.
import urllib

try:
    urllib.urlretrieve("http://www.videoURL.mp4", "filename.mp4")
except Exception as e:
    print("error")
How do I amend my code to make that happen?
Answer 1:
A better way is to use requests, so you can stream the results and easily check for timeouts:
import requests

# Make the actual request, set the timeout for no data to 10 seconds
# and enable streaming responses so we don't have to keep the large
# files in memory
request = requests.get('http://www.videoURL.mp4', timeout=10, stream=True)

# Open the output file and make sure we write in binary mode
with open('filename.mp4', 'wb') as fh:
    # Walk through the request response in chunks of 1024 * 1024 bytes, so 1MiB
    for chunk in request.iter_content(1024 * 1024):
        # Write the chunk to the file
        fh.write(chunk)
        # Optionally we can check here if the download is taking too long
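That last comment can be made concrete: `timeout=10` only bounds the wait for each chunk, not the whole transfer, so an overall deadline has to be tracked separately. A minimal sketch, where the function names are my own and the 500-second figure comes from the question:

```python
import time
import requests

def deadline_exceeded(start, limit):
    # True once more than `limit` seconds have passed since `start`
    return time.monotonic() - start > limit

def download_with_deadline(url, filename, limit=500, chunk_size=1024 * 1024):
    start = time.monotonic()
    # timeout=10 bounds the wait for each chunk of data, not the whole transfer
    response = requests.get(url, timeout=10, stream=True)
    with open(filename, 'wb') as fh:
        for chunk in response.iter_content(chunk_size):
            fh.write(chunk)
            # Enforce the overall wall-clock budget between chunks
            if deadline_exceeded(start, limit):
                raise TimeoutError('download exceeded %s seconds' % limit)
```

Note that the check only runs between chunks, so a single stalled chunk is still bounded by `timeout=10` rather than by the overall limit.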
Answer 2:
urlretrieve does not have that option. But you can easily achieve the same thing with the help of urlopen, writing the result to a file, like so:
import urllib.request

request = urllib.request.urlopen("http://www.videoURL.mp4", timeout=500)
with open("filename.mp4", 'wb') as f:
    try:
        f.write(request.read())
    except Exception:
        print("error")
That's if you are using Python 3, where urlopen lives in urllib.request. If you are using Python 2, you should rather use urllib2.urlopen. Be aware that the timeout argument bounds each blocking socket operation (connect, each read), not the total download time.
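Because timeout= only bounds individual socket operations, enforcing the question's 500-second limit on the whole download with the standard library alone requires reading in chunks against an explicit wall-clock budget. A minimal sketch, where the function name and the 1 MiB chunk size are my own choices:

```python
import time
import urllib.request

def fetch_with_budget(url, filename, budget=500, chunk_size=1024 * 1024):
    """Stream `url` to `filename`, aborting once `budget` seconds elapse."""
    start = time.monotonic()
    # timeout= bounds each blocking socket operation; the check inside
    # the loop bounds the total wall-clock time of the download
    with urllib.request.urlopen(url, timeout=10) as response, \
            open(filename, 'wb') as f:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            f.write(chunk)
            if time.monotonic() - start > budget:
                raise TimeoutError('download exceeded %s seconds' % budget)
```

As with the requests version, the budget is only checked between chunks, so a single stalled read is cut off by the socket timeout rather than by the budget.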
Source: https://stackoverflow.com/questions/32763720/timeout-a-file-download-with-python-urllib