Amazon S3 upload file timeout

北恋 2020-12-18 22:56

I have an 800 KB JPG file. I try to upload it to S3 and keep getting a timeout error. Can you please figure out what is wrong? 800 KB is rather small for an upload.

3 Answers
  • 2020-12-18 23:39

    Is it possible that IOUtils.toByteArray is draining your input stream so that there is no more data to be read from it when the service call is made? In that case, a stream.reset() would fix the issue.
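
    A minimal sketch of that workaround, assuming the AWS SDK for Java v1 (the class, bucket, and key names are placeholders): since IOUtils.toByteArray has already consumed the stream, it is often simpler to keep the bytes and upload from a fresh ByteArrayInputStream than to rely on reset(), which only works if the stream supports marking.

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        import org.apache.commons.io.IOUtils;

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.model.ObjectMetadata;

        public class S3StreamUpload {
            static void upload(AmazonS3 s3, InputStream in) throws IOException {
                // IOUtils.toByteArray consumes the stream, so keep the bytes
                // and upload from a fresh, readable stream instead of the
                // drained original.
                byte[] bytes = IOUtils.toByteArray(in);

                ObjectMetadata metadata = new ObjectMetadata();
                metadata.setContentLength(bytes.length); // length known up front

                s3.putObject("my-bucket", "photos/photo.jpg",
                        new ByteArrayInputStream(bytes), metadata);
            }
        }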

    But if you're just uploading a file (as opposed to an arbitrary InputStream), you can use the simpler form of AmazonS3.putObject() that takes a File, and then you won't need to compute the content length at all.

    http://docs.amazonwebservices.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#putObject(java.lang.String, java.lang.String, java.io.File)
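
    For example, a minimal sketch (the bucket, key, and file path are placeholders):

        import java.io.File;

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;

        public class S3FileUpload {
            public static void main(String[] args) {
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
                // The SDK reads the content length from the file itself,
                // so no ObjectMetadata is needed.
                s3.putObject("my-bucket", "photos/photo.jpg", new File("photo.jpg"));
            }
        }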

    The SDK client will automatically retry such network errors several times. You can tweak how many retries it uses by instantiating it with a ClientConfiguration object.

    http://docs.amazonwebservices.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setMaxErrorRetry(int)
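
    A minimal sketch of that configuration (the retry count of 5 is just an example value):

        import com.amazonaws.ClientConfiguration;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;

        public class S3RetryConfig {
            public static void main(String[] args) {
                ClientConfiguration config = new ClientConfiguration();
                config.setMaxErrorRetry(5); // retry failed requests up to 5 times

                AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                        .withClientConfiguration(config)
                        .build();
                // s3 is now ready to use with the higher retry ceiling.
            }
        }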

  • 2020-12-18 23:50

    If your endpoint is behind a VPC, it will also silently error out. You can add a new VPC endpoint for S3:

    https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
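
    For example, assuming the AWS CLI is configured (the VPC and route-table IDs below are placeholders), a gateway endpoint for S3 in us-east-1 can be created with:

        aws ec2 create-vpc-endpoint \
            --vpc-id vpc-0123456789abcdef0 \
            --service-name com.amazonaws.us-east-1.s3 \
            --route-table-ids rtb-0123456789abcdef0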

  • 2020-12-18 23:51

    I would like to thank Gabriel for that answer. I implemented the VPC endpoint for S3 and, using rclone, saw my errors go from the hundreds to zero. This makes ECS-to-S3 transfers go over AWS's internal network, which is significantly faster and more reliable. The only other point I would add is to never try to back up network drives to S3 - world of hurt there.

    This command (and options) has been working perfectly for me all day:

        rclone --progress copy /home/sound/effects/$formatName/$fileType/ \
            $S3CONFIG:$S3BUCKET/$formatName-$fileType/$fileType/ \
            --contimeout 10m0s --max-backlog 100 --transfers 8
