amazon-s3

How to upload an audio file to S3 via API Gateway?

Submitted by 这一生的挚爱 on 2020-01-07 07:48:13

Question: I created an API in API Gateway to upload audio files to S3; the file is sent from a local PC as multipart/form-data. The API integration request is shown below. In URL Path Parameters, I added bucket as a parameter and directly set the bucket name. When I try to upload the file I get an error response, body: '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>InvalidArgument</Code><Message>x-amz-content-sha256 must be UNSIGNED-PAYLOAD, STREAMING-AWS4-HMAC-SHA256-PAYLOAD, or a valid sha256 value.<
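A likely fix, per the error message itself, is to supply an x-amz-content-sha256 header that S3 will accept. A minimal Python sketch, assuming the gateway passes headers through to S3 (the invoke URL, bucket path, and file names are placeholders):

    import hashlib
    import requests

    # Hypothetical invoke URL for the API Gateway stage plus bucket path param.
    api_url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/my-bucket/audio.mp3"

    with open("audio.mp3", "rb") as f:
        body = f.read()

    # S3 accepts UNSIGNED-PAYLOAD, a streaming value, or the hex SHA-256
    # of the request body in x-amz-content-sha256; we compute the latter.
    headers = {
        "Content-Type": "audio/mpeg",
        "x-amz-content-sha256": hashlib.sha256(body).hexdigest(),
    }

    resp = requests.put(api_url, data=body, headers=headers)
    resp.raise_for_status()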

Glob pattern with Amazon S3

Submitted by こ雲淡風輕ζ on 2020-01-07 07:11:16

Question: I want to move files from one S3 bucket to another S3 bucket, but only files whose names start with "part". I can do it with Java, but is it possible with the Amazon CLI? Can we use a glob pattern in the CLI? My object names are like: part0000 part0001

Answer 1: Yes, this is possible through the aws CLI, using the --include and --exclude options. As an example, you can use the aws s3 sync command to sync your part files: aws s3 sync --exclude '*' --include 'part*' s3://my-amazing-bucket/
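For anyone who would rather do the move in code, a rough boto3 equivalent of that CLI filter (bucket names are placeholders; a "move" in S3 is a copy followed by a delete):

    import boto3

    SRC, DST = "my-amazing-bucket", "my-other-bucket"  # hypothetical names
    s3 = boto3.resource("s3")

    # S3 has no server-side glob; filtering on the "part" key prefix gives
    # the same result as --exclude '*' --include 'part*'.
    for obj in s3.Bucket(SRC).objects.filter(Prefix="part"):
        s3.Object(DST, obj.key).copy_from(
            CopySource={"Bucket": SRC, "Key": obj.key})
        obj.delete()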

Is there a way to avoid this error: (node) warning: Recursive process.nextTick detected

Submitted by 北城以北 on 2020-01-07 06:57:21

Question: I'm seeing a bunch of questions around this error, but no one seems to have an actual answer, and none of the causes listed by the other posters apply to my case. I've tracked it down to uploads of particularly large files (50 MB or more) from my server to Amazon's S3. Somewhere in that process I get several hundred instances of (node) warning: Recursive process.nextTick detected. This will break in the next version of node. Please use setImmediate for recursive deferral. and
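No answer excerpt survives here, and the question is Node-specific, but the usual mitigation for very large uploads is a streamed multipart transfer instead of buffering the file and recursively deferring writes. Purely as a cross-language illustration, a boto3 sketch with placeholder names:

    import boto3
    from boto3.s3.transfer import TransferConfig

    # Files above the threshold are uploaded in concurrent multipart
    # chunks rather than being held in memory whole.
    config = TransferConfig(multipart_threshold=8 * 1024 * 1024)
    boto3.client("s3").upload_file(
        "big-file.bin", "my-bucket", "big-file.bin", Config=config)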

Amazon S3 images cache-control not being applied

Submitted by 故事扮演 on 2020-01-07 06:17:56

Question: I searched all over and found a method to cache images on Amazon S3: whenever I upload an image, I add a Cache-Control metadata entry and set max-age=86400. However, every speed-test site I try says that my images do not have caching applied. I am not sure if it matters, but I have CloudFront linked to this S3 bucket. Sorry, completely new to AWS. Anyone know why my images may not be caching?

Answer 1: on any sort of speed test site it says that my images do not have a
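A common cause is that Cache-Control has to be stored as object metadata at upload time, and CloudFront serves whatever headers the object had when it was fetched. A boto3 sketch of attaching the header on upload (bucket and key are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # CacheControl becomes the Cache-Control response header S3 serves
    # (and CloudFront forwards) for this object.
    s3.upload_file(
        "photo.jpg", "my-image-bucket", "photo.jpg",
        ExtraArgs={"ContentType": "image/jpeg",
                   "CacheControl": "max-age=86400"},
    )

For objects that already exist, the metadata has to be rewritten (for example, a copy-in-place with MetadataDirective='REPLACE'), and a CloudFront invalidation may be needed before the new header is visible.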

S3 pre-signed URL - check if the URL was used?

Submitted by 拟墨画扇 on 2020-01-07 05:12:05

Question: I'm using S3 pre-signed URLs to upload images directly from the client side. I would like to push a message to an SQS queue only when I'm sure that the URL was used and a new image was uploaded. Given a pre-signed URL, how can I validate whether it was used?

Answer 1: Do you really need to know when a pre-signed URL has been used? Or can you just send a new message to SQS whenever a new object is uploaded to your S3 bucket? Since you are restricting uploads to using pre-signed URLs wouldn't that
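Following the answer's suggestion, an S3 event notification can push to SQS on every new object, which covers uploads made through pre-signed URLs. A boto3 sketch; the bucket name and queue ARN are placeholders, and the queue needs a policy allowing the bucket to send to it:

    import boto3

    s3 = boto3.client("s3")

    # Fires for every object-created event, including PUTs made
    # through pre-signed URLs.
    s3.put_bucket_notification_configuration(
        Bucket="my-upload-bucket",
        NotificationConfiguration={
            "QueueConfigurations": [{
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:uploads",
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )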

How to perform client-side encryption in the iOS AWS SDK?

Submitted by 三世轮回 on 2020-01-07 02:31:28

Question: Is it available, or should I choose my own algorithm to encrypt data and upload it to the S3 bucket? I have a requirement to create a multiplatform application (Android/C#/iOS) in which we have to encrypt data and store it on the server side. I've tried this library to encrypt data, but on the iOS side I'm getting different results than on the other platforms.

Answer 1: I uploaded a video to an AWS S3 bucket with client-side encryption using the code below. We need the AES256 key and MD5 key when going to
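The AES256-key-plus-MD5 pattern the answer describes corresponds to S3's customer-provided-key mode (SSE-C) rather than pure client-side encryption. For reference, a boto3 sketch with placeholder names; boto3 base64-encodes the key and derives the MD5 header itself:

    import os
    import boto3

    key = os.urandom(32)  # a random 256-bit customer key; store it safely

    # The same key must be supplied again to read the object back.
    with open("video.mp4", "rb") as f:
        boto3.client("s3").put_object(
            Bucket="my-secure-bucket",
            Key="video.mp4",
            Body=f,
            SSECustomerAlgorithm="AES256",
            SSECustomerKey=key,
        )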

How to make an S3 bucket public in Python

Submitted by 时光总嘲笑我的痴心妄想 on 2020-01-07 01:55:06

Question: I have an existing bucket in Amazon S3 and I want to make its contents publicly available without any authentication. I have tried what the boto documents suggest: To set a canned ACL for a bucket, use the set_acl method of the Bucket object. The argument passed to this method must be one of the four permissible canned policies named in the list CannedACLStrings contained in acl.py. For example, to make a bucket readable by anyone: b.set_acl('public-read') It is not working; I still can't access my
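One reason b.set_acl('public-read') can appear to do nothing is that a bucket ACL does not change the ACLs of objects already stored in it; anonymous reads on every object are more simply granted with a bucket policy. A boto3 sketch (bucket name is a placeholder):

    import json
    import boto3

    # Grants anonymous s3:GetObject on every key in the bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
        }],
    }

    boto3.client("s3").put_bucket_policy(
        Bucket="my-bucket", Policy=json.dumps(policy))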

Amazon S3 URL + being encoded to %2B?

Submitted by 拈花ヽ惹草 on 2020-01-06 22:27:36

Question: I've got Amazon S3 integrated with my hosting account at WP Engine. Everything works great except for files with + characters in their names. For example, when a file is named test+2.pdf: http://support.mcsolutions.com/wp-content/uploads/2011/11/test+2.pdf does not work. The following URL is the Amazon URL; notice the + character is encoded. Is there a way to prevent or change this? http://mcsolutionswpe.s3.amazonaws.com/mcsupport/wp-content/uploads/2011/11/test%2b2
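For what it's worth, the encoded form is the safer one: a bare + in a URL is widely decoded as a space, so links to a key containing a literal plus should percent-encode it as %2B when they are generated. A small Python illustration (the key is taken from the question):

    from urllib.parse import quote

    key = "mcsupport/wp-content/uploads/2011/11/test+2.pdf"

    # quote() leaves "/" alone but encodes "+" as "%2B", producing a link
    # that resolves to the object whose key contains a literal plus.
    url = "http://mcsolutionswpe.s3.amazonaws.com/" + quote(key)
    print(url)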

Images uploaded to AWS, but can't be viewed in the view

Submitted by 拜拜、爱过 on 2020-01-06 21:13:53

Question: I have just integrated AWS with my Rails/Heroku app and I am using Paperclip. I am able to upload files (photos) and see them in AWS; however, they are not showing up in the view. I am not getting any errors, and have not found a working solution in other posts. It seems I am able to view the image in a browser, and permissions are set to public. I suspect that I may have my region wrong: in the URL of my AWS dashboard the region says region=us-west-2, yet googling and reading through
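Since the suspicion is a region mismatch, one quick check is to ask S3 where the bucket actually lives and compare that against the region the app is configured with. A boto3 sketch (bucket name is a placeholder):

    import boto3

    # get_bucket_location returns None for buckets in us-east-1.
    loc = boto3.client("s3").get_bucket_location(
        Bucket="my-app-bucket")["LocationConstraint"]
    print(loc or "us-east-1")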

Set secret keys for Amazon AWS S3

Submitted by 。_饼干妹妹 on 2020-01-06 20:14:11

Question: I use fog and carrierwave. Right now I just have a simple uploader that I run locally:

    CarrierWave.configure do |config|
      config.fog_credentials = {
        :provider              => 'AWS',
        :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
        :aws_secret_access_key => ENV['S3_SECRET_KEY'],
        :region                => 'us-west-1', # Change this for
      }
      config.fog_directory = "bucket-main"
    end

But now I have a question: where should I save my secret keys? In a Heroku environment I could set them like this: $ heroku config:set S3_ACCESS_KEY
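The usual pattern here is to keep the keys out of source control entirely and read them from the environment at runtime: locally from a gitignored file of exports (or a dotenv-style loader), and on Heroku via heroku config:set as shown. Purely as an illustration of the reading side, in Python, using the same variable names as the uploader above:

    import os

    # Fails loudly if the variables were never exported, rather than
    # silently uploading with empty credentials.
    access_key = os.environ["S3_ACCESS_KEY"]
    secret_key = os.environ["S3_SECRET_KEY"]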