I have an 800KB JPG file. When I try to upload it to S3 I keep getting a timeout error. Can you please help me figure out what is wrong? 800KB is rather small for an upload.
Is it possible that IOUtils.toByteArray is draining your input stream so that there is no more data to be read from it when the service call is made? In that case a stream.reset() would fix the issue.
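For example, a minimal sketch of that situation and one way out of it (the bucket name, key, and stream source are placeholders, and it assumes the bytes were read with Commons IO only to get the content length):
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import org.apache.commons.io.IOUtils;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class StreamUploadExample {
    // Upload from an InputStream without accidentally draining it before the service call.
    static void upload(AmazonS3 s3, InputStream in) throws Exception {
        byte[] bytes = IOUtils.toByteArray(in);          // this reads the stream to the end
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(bytes.length);
        // "in" is now drained; wrap the bytes you already have in a fresh stream
        // (or call in.reset() if the stream supports mark/reset) so putObject has data to send.
        s3.putObject("my-bucket", "photos/image.jpg",
                new ByteArrayInputStream(bytes), metadata);
    }
}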
But if you're just uploading a file (as opposed to an arbitrary InputStream), you can use the simpler form of AmazonS3.putObject() that takes a File, and then you won't need to compute the content length at all.
http://docs.amazonwebservices.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#putObject(java.lang.String, java.lang.String, java.io.File)
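A minimal sketch of that form (bucket name, key, file path, and credentials are placeholders you would replace with your own):
import java.io.File;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class FileUploadExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        // No ObjectMetadata needed: the SDK determines the content length from the File itself.
        s3.putObject("my-bucket", "photos/image.jpg", new File("/path/to/image.jpg"));
    }
}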
The client will also automatically retry such network errors several times. You can tweak how many retries it attempts by instantiating it with a ClientConfiguration object.
http://docs.amazonwebservices.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setMaxErrorRetry(int)
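Something like this (a sketch; the retry count, timeout, and credentials are just example values):
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class RetryConfigExample {
    public static void main(String[] args) {
        ClientConfiguration config = new ClientConfiguration();
        config.setMaxErrorRetry(10);            // retry transient network errors up to 10 times
        config.setSocketTimeout(5 * 60 * 1000); // optional: raise the socket timeout (in ms)
        AmazonS3 s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"), config);
        // ... use s3 as usual
    }
}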
If your client is running inside a VPC with no route to S3, the call will also silently error out. You can add a new VPC endpoint for S3 as described here:
https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
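If you prefer to create the endpoint from code rather than the console, something along these lines should work with the EC2 client (a sketch under assumptions: the VPC ID, route table ID, credentials, and region-specific service name are placeholders to replace with your own):
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.CreateVpcEndpointRequest;

public class VpcEndpointExample {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        CreateVpcEndpointRequest request = new CreateVpcEndpointRequest()
                .withVpcId("vpc-xxxxxxxx")                      // placeholder VPC ID
                .withServiceName("com.amazonaws.us-east-1.s3")  // S3 service name for your region
                .withRouteTableIds("rtb-xxxxxxxx");             // route tables that should reach S3
        ec2.createVpcEndpoint(request);
    }
}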
I would like to thank Gabriel for that answer. I implemented the VPC endpoint for S3 and, using rclone, saw my errors go from the hundreds to zero. This makes ECS-to-S3 transfers go over AWS's internal network, which is significantly faster and more reliable. The only other point I would add is to never try to back up network drives to S3 - that is a world of hurt.
This command (and these options) has been working perfectly for me all day: rclone --progress copy /home/sound/effects/$formatName/$fileType/ $S3CONFIG:$S3BUCKET/$formatName-$fileType/$fileType/ --contimeout 10m0s --max-backlog 100 --transfers 8