amazon-s3

Sending a file directly from the browser to S3 while changing the file name

十年热恋 posted on 2021-02-07 13:17:09
Question: I am using signed, authorized S3 uploads so that users can upload files directly from their browser to S3, bypassing my server. This works, but the file is saved under the same name it has on the user's machine. I'd like to save it on S3 under a different name. The form data I POST to Amazon looks like this:

var formData = new FormData();
formData.append('key', targetPath); // e.g. /path/inside/bucket/myFile.mov
formData.append('AWSAccessKeyId', s3Auth.AWSAccessKeyId); // aws public key
formData.append(

AWS SDK for .NET can't access credentials with IIS

二次信任 posted on 2021-02-07 12:51:05
Question: I'm having trouble accessing the AWS credentials in the SDK Store, but the problem only appears when running under IIS. If I hit the same code by invoking an NUnit test from ReSharper, dependency injection works and the S3 client authenticates successfully.

IAmazonS3 s3Client = new AmazonS3Client();

Has anyone else run into this problem? How did you get the dependency injection to work? [Edit] The credential-file approach has been recommended for use with IIS because the

Multiple S3 buckets in the same CloudFront distribution

懵懂的女人 posted on 2021-02-07 12:28:28
Question: I created a CloudFront distribution with a CNAME images.domain.com and SSL, and I have two S3 buckets: one for user uploads and one for product pictures. The uploads bucket is the default. I would like to serve both buckets from the same CloudFront distribution, so I added the two buckets as origins and created a "Behavior" with the path /products/* that uses my product bucket as its origin. My "Behaviors" are:

/products/* to: products bucket (precedence = 0)
Default (*) to: uploads bucket (precedence = 1)

When I go
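The excerpt ends before the actual failure, but a common pitfall with exactly this setup is that CloudFront forwards the full request path, /products/..., to whichever origin the behavior matches, so the object must actually live under a products/ prefix inside that bucket. A small sketch of the behavior-matching logic (bucket names hypothetical):

```python
from fnmatch import fnmatch

# Behaviors in precedence order, as in the question: the first matching
# path pattern wins; the default (*) catches everything else.
BEHAVIORS = [
    ("/products/*", "products-bucket"),
    ("*", "uploads-bucket"),
]

def route(path: str) -> tuple[str, str]:
    """Return (origin bucket, object key requested from that origin)."""
    for pattern, bucket in BEHAVIORS:
        if fnmatch(path, pattern):
            # CloudFront does NOT strip the matched prefix: the origin
            # receives the full path, so "products-bucket" must contain
            # an object keyed "products/shoe.jpg", not "shoe.jpg".
            return bucket, path.lstrip("/")
    raise AssertionError("unreachable: '*' matches everything")

print(route("/products/shoe.jpg"))  # ('products-bucket', 'products/shoe.jpg')
print(route("/avatars/me.png"))     # ('uploads-bucket', 'avatars/me.png')
```

If the product pictures sit at the bucket root, the usual fixes are either re-keying them under products/ or (on newer distributions) attaching a CloudFront function or origin path setting that rewrites the prefix.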

Enabling POST/PUT/DELETE on AWS CloudFront?

只愿长相守 posted on 2021-02-07 12:15:24
Question: In AWS CloudFront, under "Allowed HTTP Methods" in the "Default Cache Behavior Settings" area, I set: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. My CloudFront distribution is linked to an AWS S3 bucket, so I set the S3 CORS configuration to:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
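The XML above is cut off. For reference, the same kind of policy can be expressed as the dictionary that boto3's put_bucket_cors accepts; the DELETE and HEAD entries below are assumptions (the excerpt stops after POST), and the bucket name is a placeholder:

```python
# CORS configuration equivalent to the (truncated) XML in the question,
# in the dict form boto3's put_bucket_cors takes. DELETE and HEAD are
# assumed; the original excerpt cuts off after POST.
cors_config = {
    "CORSRules": [
        {
            "AllowedOrigins": ["*"],
            "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
            "AllowedHeaders": ["*"],
        }
    ]
}

# Applying it would look like this (not run here, needs real credentials):
#   import boto3
#   boto3.client("s3").put_bucket_cors(
#       Bucket="my-bucket", CORSConfiguration=cors_config)
print(cors_config["CORSRules"][0]["AllowedMethods"])
```

One detail worth noting: S3's CORS rules only accept GET, PUT, POST, DELETE, and HEAD as allowed methods, so the PATCH and OPTIONS methods enabled at the CloudFront layer have no S3 CORS counterpart.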

Set content type in S3 when attaching via Paperclip 4?

旧街凉风 posted on 2021-02-07 11:59:15
Question: I'm trying to attach CSV files to a Rails 3 model using Paperclip 4.1.1, but I'm having trouble getting the content type reported by S3 to be text/csv (instead I get text/plain). When I subsequently download the file from S3, the extension is changed to match the content type rather than preserving the original one (so test.csv is downloaded as test.txt). From what I can see, when you upload a file, the FileAdapter caches the content type on creation with whatever
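The Paperclip internals are cut off above, but the underlying goal can be stated simply: derive the stored content type from the file extension rather than from content sniffing, which sees CSV bytes as plain text. Paperclip itself is Ruby; this Python sketch only demonstrates the extension-to-type mapping the asker wants S3 to end up with:

```python
import mimetypes

# Map a filename to the content type the asker wants S3 to record:
# extension-driven, so .csv stays text/csv instead of the sniffed
# text/plain.
def content_type_for(filename: str) -> str:
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"

print(content_type_for("test.csv"))  # text/csv
print(content_type_for("test.txt"))  # text/plain
```

In Paperclip terms, the usual fix is to override the spoof-detection/content-type inference so the declared type for .csv attachments is text/csv before the file is pushed to S3.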

ActiveStorage & S3: Make files public

限于喜欢 posted on 2021-02-07 10:33:22
Question: In my Rails application, some images should be public and others should be private (I don't mind having all of them public). Currently, when calling ActiveStorage's service_url (when using S3), a new presigned URL is generated. I completely understand this, but it's not what I want. Right now, generating the presigned URLs takes too much time. Example, fetching 10 records:

ActiveRecord: 52.3ms
Generating the JSON: 1,790ms

If we dig deeper, we see the following: S3 Storage (23

AWS SDK cannot read environment variables

醉酒当歌 posted on 2021-02-07 09:19:12
Question: I am setting AWS_* environment variables for Jenkins as below:

sudo apt-get update -y
sudo apt-get install -y python3 python-pip python-devel
sudo pip install awscli
S3_LOGIN=$(aws sts assume-role --role-arn rolename --role-session-name s3_session)
export AWS_CREDENTIAL_PROFILES_FILE=~/.aws/credentials
export AWS_ACCESS_KEY_ID=$(echo ${S3_LOGIN} | jq --raw-output '.Credentials|"\(.AccessKeyId)"')
export AWS_SECRET_ACCESS_KEY=$(echo ${S3_LOGIN} | jq --raw-output '.Credentials|"\(.SecretAccessKey)"')
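The script relies on jq pulling fields out of the assume-role response. For reference, the JSON shape those filters assume looks like this (all values below are dummies), parsed here in Python rather than jq:

```python
import json

# Dummy response in the shape `aws sts assume-role` returns; the jq
# filters in the question extract .Credentials.AccessKeyId and
# .Credentials.SecretAccessKey from it.
response = json.loads("""
{
  "Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secretexample",
    "SessionToken": "tokenexample",
    "Expiration": "2021-02-07T10:00:00Z"
  }
}
""")

creds = response["Credentials"]
print(creds["AccessKeyId"])
print(creds["SecretAccessKey"])
# Note: assume-role hands out *temporary* credentials, so the SDK also
# needs AWS_SESSION_TOKEN. The script in the question never exports
# .Credentials.SessionToken, which is a likely cause of the SDK failing
# to authenticate from the environment variables.
print(creds["SessionToken"])
```

A plausible fix, mirroring the existing exports, is a third line exporting AWS_SESSION_TOKEN from .Credentials.SessionToken, and keeping the exports in the same shell that runs the build (an `export` inside a `sudo`-spawned subshell does not survive into the parent).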
