Question
Is boto3.Bucket.upload_file blocking or non-blocking?
i.e. if I were to run the following
bucket = session.resource('s3').Bucket(bucket_name)
bucket.upload_file(Key=s3_key, Filename=source_path)
os.remove(source_path)
Do I have a race condition, depending on the size of the file? Or is upload guaranteed to complete before file deletion?
Answer 1:
The current boto3 upload_file is blocking. As mootmoot said, you should definitely implement some error handling to be safe when you delete the file.
Answer 2:
The fact that upload_file() uses S3Transfer indicates the call is non-blocking. You need to track the progress (via S3Transfer's APIs) and delete the file only after making sure the transfer is complete.
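For example, something along these lines (a rough sketch only; UploadProgress is an illustrative helper, and bucket_name, s3_key and source_path are the names from the question):
import os
import threading

import boto3

class UploadProgress:
    # Illustrative helper: the transfer callback reports the bytes sent for each chunk.
    def __init__(self, filename):
        self._total = os.path.getsize(filename)
        self._seen = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount):
        with self._lock:
            self._seen += bytes_amount

    def complete(self):
        return self._seen >= self._total

bucket = boto3.resource('s3').Bucket(bucket_name)
progress = UploadProgress(source_path)
bucket.upload_file(Filename=source_path, Key=s3_key, Callback=progress)

# Only remove the local file once every byte has been reported as transferred.
if progress.complete():
    os.remove(source_path)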
Answer 3:
Whether it is blocking or non-blocking, you SHOULD NOT rely on the API alone when things go wrong. You MUST add exception handling in case the upload fails partway through for any reason (e.g. an admin decides to restart the router while you are uploading).
bucket = session.resource('s3').Bucket(bucket_name)
try:
    bucket.upload_file(Key=s3_key, Filename=source_path)
    os.remove(source_path)
except Exception:
    # log the failure and keep the local file for a retry instead of silently losing it
    raise
Another good practice when uploading a file to S3 is to attach additional Metadata.
bucket.upload_file(
    Key=s3_key,
    Filename=source_path,
    ExtraArgs={'Metadata': {'source_path': source_path}}
)
Adding an event notification to the S3 bucket that fires on a successful PUT also lets you build a cleanup process for the case where the upload succeeds but the local file removal fails (imagine the file is locked, or it only has read-only access).
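For example, a minimal sketch of wiring up such a notification (cleanup_queue_arn is a hypothetical SQS queue ARN that a cleanup job would poll; the queue policy must already allow S3 to send messages to it):
import boto3

s3_client = boto3.client('s3')
s3_client.put_bucket_notification_configuration(
    Bucket=bucket_name,
    NotificationConfiguration={
        'QueueConfigurations': [
            {
                'QueueArn': cleanup_queue_arn,       # hypothetical queue ARN
                'Events': ['s3:ObjectCreated:Put'],  # fire on every successful PUT
            }
        ]
    },
)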
Answer 4:
Boto3 does not have support for async calls, so the function is blocking.
See conversations regarding async + boto3 here:
https://github.com/boto/boto3/issues/648
https://github.com/boto/boto3/issues/746
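If you do want the upload to behave asynchronously, a common workaround (a sketch only, reusing bucket_name, s3_key and source_path from the question) is to run the blocking call in a worker thread:
import os
from concurrent.futures import ThreadPoolExecutor

import boto3

def upload_then_remove(bucket_name, key, filename):
    # Create the resource inside the worker; boto3 resources are not thread-safe.
    bucket = boto3.resource('s3').Bucket(bucket_name)
    bucket.upload_file(Filename=filename, Key=key)  # blocks inside this thread only
    os.remove(filename)                             # runs only if the upload succeeded

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(upload_then_remove, bucket_name, s3_key, source_path)
    future.result()  # re-raises any exception from the upload or the removal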
Source: https://stackoverflow.com/questions/37603148/is-boto3-bucket-upload-file-blocking-or-non-blocking