I used to be a happy s3cmd user. However, recently when I try to transfer a large zip file (~7 GB) to Amazon S3, I get this error:
$> s3cmd put thef
I tried all of the other answers, but none worked. It looks like s3cmd is fairly sensitive. In my case the S3 bucket was in the EU; small files would upload, but once they reached ~60 KB the upload always failed.
Changing ~/.s3cfg fixed it.
Here are the changes I made:
host_base = s3-eu-west-1.amazonaws.com
host_bucket = %(bucket)s.s3-eu-west-1.amazonaws.com
I experienced the same issue; it turned out to be a bad bucket_location value in ~/.s3cfg.
This blog post led me to the answer: http://jeremyshapiro.com/blog/2011/02/errno-32-broken-pipe-in-s3cmd/
If the bucket you're uploading to doesn't exist (or you mistyped it), it'll fail with that error. Thank you, generic error message.
After inspecting my ~/.s3cfg, I saw that it had:
bucket_location = Sydney
Rather than:
bucket_location = ap-southeast-2
Correcting this value to the proper region name solved the issue.
I've just come across this problem myself. I've got a 24GB .tar.gz file to put into S3.
Uploading smaller pieces will help.
There is also a ~5 GB file size limit, so I'm splitting the file into pieces that can be reassembled when they are downloaded later.
split -b100m ../input-24GB-file.tar.gz input-24GB-file.tar.gz-
The last part of that line is a 'prefix'. Split will append 'aa', 'ab', 'ac', etc. to it. The -b100m means 100 MB chunks. A 24 GB file will end up with about 240 parts of 100 MB each, named 'input-24GB-file.tar.gz-aa' through 'input-24GB-file.tar.gz-jf'.
To combine them later, download them all into a directory and:
cat input-24GB-file.tar.gz-* > input-24GB-file.tar.gz
It's also worth taking md5sums of the original and split files and storing them in the S3 bucket, or better (if the file is not too big), using a system like parchive so you can check for, and even fix, some download problems.
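At a small scale, the whole round trip can be sketched like this. The upload loop is commented out because it needs real credentials, and "mybucket" is a placeholder; the demo shrinks the file to 5 MB with 1 MB chunks (note I use an uppercase M suffix, which GNU split accepts):

```shell
# Demo of split -> upload -> reassemble -> verify, shrunk to a 5 MB file
# with 1 MB chunks; "mybucket" is a placeholder bucket name.
dd if=/dev/urandom of=bigfile.tar.gz bs=1M count=5 2>/dev/null
split -b1M bigfile.tar.gz bigfile.tar.gz-
md5sum bigfile.tar.gz > bigfile.tar.gz.md5   # record the original's checksum
# for part in bigfile.tar.gz-*; do s3cmd put "$part" s3://mybucket/parts/; done
# ...later, after downloading the parts into one directory...
cat bigfile.tar.gz-* > rejoined.tar.gz
cmp bigfile.tar.gz rejoined.tar.gz && echo "files match"
```

The glob in the cat step works because split names the parts in lexicographic order (aa, ab, ac, ...), so the shell expands them back in the right sequence.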
In my case the reason for the failure was the server's clock being ahead of S3's. My server (located in US East) was set to GMT+4 while I was using Amazon's US East storage facility.
After adjusting my server to US East time, the problem was gone.
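To see why that bites, here's the arithmetic with hypothetical timestamps (S3 signs requests against GMT and rejects requests whose timestamp is too far off; this uses GNU date's -d parsing):

```shell
# Hypothetical timestamps: a clock set 4 hours ahead but reported as GMT
# looks 4 hours in the future to S3, which then rejects the signed request.
s3_time="Mon, 01 Jan 2024 12:00:00 GMT"      # what S3's clock says
server_time="Mon, 01 Jan 2024 16:00:00 GMT"  # a GMT+4 clock mislabelled as GMT
skew=$(( $(date -ud "$server_time" +%s) - $(date -ud "$s3_time" +%s) ))
echo "skew: $((skew / 3600)) hours"
```

Anything on this scale is far beyond the tolerance for signed requests, so keeping the clock synced (e.g. via NTP) avoids the whole class of failures.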
This error occurs when Amazon returns an error: they seem to disconnect the socket to keep you from uploading gigabytes of request body only to get back "no, that failed" in response. This is why some people get it due to clock skew, some due to policy errors, and others because they run into size limitations requiring the use of the multi-part upload API. It isn't that everyone is wrong, or even looking at different problems: these are all different symptoms of the same underlying behavior in s3cmd.
As most error conditions are deterministic, s3cmd's behavior of throwing away the error message and retrying more slowly is unfortunate :(. To get the actual error message, you can go into /usr/share/s3cmd/S3/S3.py (remembering to delete the corresponding .pyc so your changes are picked up) and add a print e in the send_file function's except Exception, e: block.
In my case, I was trying to set the Content-Type of the uploaded file to "application/x-debian-package". Apparently, s3cmd's S3.object_put 1) does not honor a Content-Type passed via --add-header and yet 2) fails to overwrite the Content-Type added via --add-header as it stores headers in a dictionary with case-sensitive keys. The result is that it does a signature calculation using its value of "content-type" and then ends up (at least with many requests; this might be based on some kind of hash ordering somewhere) sending "Content-Type" to Amazon, leading to the signature error.
In my specific case today, it seems like -M would cause s3cmd to guess the right Content-Type, but it does so based on the filename alone... I would have hoped it would use the mimemagic database based on the contents of the file. Honestly, though: s3cmd doesn't even manage to return a failing shell exit status when it fails to upload the file, so combined with all of these other issues it is probably better to just write your own one-off tool to do the one thing you need... it is almost certain that in the end it will save you time when you get bitten by some corner case of this tool :(.
s3cmd version 1.1.0-beta3 or later will automatically use multipart uploads to allow sending arbitrarily large files (source). You can control the chunk size it uses, too, e.g.
s3cmd --multipart-chunk-size-mb=1000 put hugefile.tar.gz s3://mybucket/dir/
This will do the upload in 1 GB chunks.
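For the ~7 GB file in the question, a quick back-of-the-envelope (pure shell arithmetic, no credentials needed; 7168 MB is an assumed size for "~7 GB") shows how many parts that produces:

```shell
# Ceiling division: a 7168 MB (~7 GB) file in 1000 MB chunks.
file_mb=7168
chunk_mb=1000
parts=$(( (file_mb + chunk_mb - 1) / chunk_mb ))
echo "$parts parts"   # 7 full parts plus one smaller final part
```

Each part is retried independently on failure, which is exactly what makes multipart uploads the right fix for large files hitting this error.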