amazon-s3

CloudFront redirect request with Lambda to trailing slash

时间秒杀一切, submitted on 2020-08-26 08:13:37
Question: I have a static SPA that uses S3 as its origin behind CloudFront. If I visit www.domain.com/page, I get the CloudFront path-prefixed bucket-directory/prod/page/, which is expected. Is it possible to capture the path in AWS Lambda and append a trailing slash to the request, so that it becomes: www.domain.com/page > [Lambda] > www.domain.com/page/? I've been looking at and trying the following resources, to little avail: http://blog.rowanudell.com/redirects-in-serverless/ http://docs.aws
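A common approach is a Lambda@Edge function on the viewer-request event that 301-redirects extensionless paths to their trailing-slash form. The sketch below is a minimal, hypothetical Python handler (the `cf` event shape is CloudFront's documented Lambda@Edge record format); it is not the asker's code, just one way the redirect could be done.

```python
def handler(event, context):
    """Lambda@Edge viewer-request handler: 301-redirect extensionless
    paths without a trailing slash to the same path with one."""
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]

    # Pass through paths that already end in "/" or look like files
    # (i.e. the last segment contains a dot, e.g. /app.js).
    last_segment = uri.rsplit("/", 1)[-1]
    if uri.endswith("/") or "." in last_segment:
        return request

    # Returning a response object instead of the request tells
    # CloudFront to send this redirect to the viewer.
    return {
        "status": "301",
        "statusDescription": "Moved Permanently",
        "headers": {
            "location": [{"key": "Location", "value": uri + "/"}]
        },
    }
```

With this in place, a request for /page is answered with a 301 to /page/, and the follow-up request maps cleanly onto the prod/page/ origin key.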

How do you allow granting public read access to objects uploaded to AWS S3?

淺唱寂寞╮, submitted on 2020-08-26 08:07:27
Question: I have created a policy that allows access to a single S3 bucket in my account. I then created a group that has only this policy, and a user that is part of that group. The user can view, delete, and upload files to the bucket, as expected. However, the user does not seem to be able to grant public read access to uploaded files. When the Grant public read access to this object(s) option is selected, the upload fails. The bucket is hosting a static website and I want to allow the frontend
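The console's "Grant public read access" option attaches a public-read ACL to the object, which requires the s3:PutObjectAcl permission in addition to s3:PutObject; if the policy lacks it, the upload is denied. Below is a hedged sketch of what such a policy could look like (the bucket name and the exact action list are assumptions, not the asker's policy):

```python
import json

# Hypothetical bucket name; substitute your own.
BUCKET = "my-static-site-bucket"

# "Grant public read access" sets a public-read ACL on the object,
# so s3:PutObjectAcl must be allowed alongside s3:PutObject.
# Object-level actions target the /* resource; ListBucket targets
# the bucket ARN itself.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadWithPublicAcl",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListBucket",
            ],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note that if the bucket has Block Public Access enabled, public ACLs are rejected regardless of IAM permissions.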

High memory usage when uploading a multipart file to Amazon S3 via streaming?

此生再无相见时, submitted on 2020-08-26 07:29:28
Question: The method below in my Java Spring application directly streams and uploads a file to an Amazon S3 bucket. I have read that using streams makes uploading large files (>100MB videos, in my use case) more memory-efficient. When testing the method with a 25MB file, the memory usage of my Java Spring application in a Kubernetes cluster spiked by 200MB! I also tried a 200MB file, and the memory spiked again, to ~2GB. There were no out-of-memory exceptions
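Spikes like this usually mean the whole stream is being buffered before the upload (for example, when the SDK is not told the content length, or when a multipart wrapper copies the file into memory first). The memory-efficient pattern, which S3 multipart-upload clients implement, is to read and send one bounded part at a time. The asker's code is Java/Spring; the sketch below illustrates the same principle in Python with a pure chunking generator (names and the toy part size are illustrative, not from the question):

```python
import io

# S3's multipart minimum part size for all but the last part is 5 MiB.
MIN_PART_SIZE = 5 * 1024 * 1024

def iter_parts(stream, part_size=MIN_PART_SIZE):
    """Yield successive parts of a stream, keeping only one part
    resident in memory at a time.

    This mirrors what a streaming multipart upload does: each yielded
    chunk corresponds to one UploadPart call, so peak memory stays
    near part_size rather than the full file size.
    """
    while True:
        chunk = stream.read(part_size)
        if not chunk:
            break
        yield chunk

# Demonstration with a small in-memory stream and a toy part size:
data = b"x" * 10_500
parts = list(iter_parts(io.BytesIO(data), part_size=4096))
```

In Java, the analogous fix is to set the content length on the request metadata (so the SDK doesn't buffer to determine it) or to use the SDK's transfer manager, which performs exactly this bounded-part streaming.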

Athena query fails with boto3 (S3 location invalid)

别等时光非礼了梦想., submitted on 2020-08-24 17:32:09
Question: I'm trying to execute a query in Athena, but it fails. Code: client.start_query_execution(QueryString="CREATE DATABASE IF NOT EXISTS db;", QueryExecutionContext={'Database': 'db'}, ResultConfiguration={'OutputLocation': "s3://my-bucket/", 'EncryptionConfiguration': {'EncryptionOption': 'SSE-S3'}}) But it raises the following exception: botocore.errorfactory.InvalidRequestException: An error occurred (InvalidRequestException) when calling the StartQueryExecution operation: The S3 location
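The "S3 location invalid" error typically means the OutputLocation is malformed, or the bucket does not exist in the same region as the Athena client. A rough syntactic check, plus the shape the call could take once the bucket is right, is sketched below (the bucket name and region are hypothetical; the regex is a simplified approximation of S3 bucket-naming rules, not the authoritative validation Athena performs):

```python
import re

def looks_like_valid_output_location(uri):
    """Rough syntactic check for an Athena OutputLocation.

    Athena requires an s3:// URI; the bucket must also actually exist,
    and in the same region as the query, which no offline check can
    verify. The bucket-name pattern here is a simplification.
    """
    return re.fullmatch(
        r"s3://[a-z0-9][a-z0-9.\-]{1,61}[a-z0-9](/.*)?", uri
    ) is not None

# The call itself would then look like (hypothetical bucket/region):
# client = boto3.client("athena", region_name="us-east-1")
# client.start_query_execution(
#     QueryString="CREATE DATABASE IF NOT EXISTS db;",
#     ResultConfiguration={
#         "OutputLocation": "s3://my-bucket/athena-results/",
#         "EncryptionConfiguration": {"EncryptionOption": "SSE-S3"},
#     },
# )
```

If the URI is well-formed and the error persists, create the bucket in the Athena client's region first; Athena will not create it for you.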

Configuring ActiveStorage to use S3 with IAM role

生来就可爱ヽ(ⅴ<●), submitted on 2020-08-24 09:17:26
Question: I'm trying to configure ActiveStorage to use an S3 bucket as a storage backend; however, I don't want to pass any of access_key_id, secret_access_key, or region. Instead, I'd like to use a previously defined IAM role. Such a configuration is mentioned here. It reads (emphasis mine): If you want to use environment variables, standard SDK configuration files, profiles, IAM instance profiles or task roles, you can omit the access_key_id, secret_access_key, and region keys in the example above. The
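Per the quoted guide, leaving the credential keys out of config/storage.yml lets the underlying AWS SDK fall back to its default provider chain, which includes IAM instance profiles and ECS task roles. A minimal sketch of such a config, with hypothetical service and bucket names:

```yaml
# config/storage.yml — hypothetical service/bucket names.
# With access_key_id, secret_access_key, and region omitted, the AWS
# SDK resolves credentials and region from its default chain: env
# vars, shared config files, instance profile, or task role.
amazon:
  service: S3
  bucket: my-app-uploads
```

The instance or task must, of course, actually have an IAM role attached that grants the needed S3 permissions on the bucket.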

How can I use AWS Boto3 to get Cloudwatch metric statistics?

核能气质少年, submitted on 2020-08-24 08:18:22
Question: I'm working on a Python 3 script designed to get S3 space-utilization statistics from AWS CloudWatch using the Boto3 library. I started with the AWS CLI and found I could get what I'm after with a command like this: aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes --namespace AWS/S3 --start-time 2017-03-06T00:00:00Z --end-time 2017-03-07T00:00:00Z --statistics Average --unit Bytes --region us-west-2 --dimensions Name=BucketName,Value=foo-bar Name=StorageType,Value
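The CLI flags map almost one-to-one onto the boto3 `get_metric_statistics` parameters. A hedged sketch of the equivalent parameter set follows; note the CLI command above is truncated after `Name=StorageType,Value`, so the `StandardStorage` value below is an assumption (it is the usual StorageType dimension for standard-class bucket size), and `Period` is set to a day because BucketSizeBytes is reported daily:

```python
from datetime import datetime, timedelta, timezone

# One-day window matching the CLI command; BucketSizeBytes is
# reported once per day, so Period is 86400 seconds.
end = datetime(2017, 3, 7, tzinfo=timezone.utc)
start = end - timedelta(days=1)

params = {
    "Namespace": "AWS/S3",
    "MetricName": "BucketSizeBytes",
    "StartTime": start,
    "EndTime": end,
    "Period": 86400,
    "Statistics": ["Average"],
    "Unit": "Bytes",
    "Dimensions": [
        {"Name": "BucketName", "Value": "foo-bar"},
        # "StandardStorage" is an assumption; the source command is
        # truncated before this value.
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
}

# The call itself (requires credentials; region must match the bucket):
# client = boto3.client("cloudwatch", region_name="us-west-2")
# response = client.get_metric_statistics(**params)
# datapoints = response["Datapoints"]
```

Each returned datapoint is a dict with the requested statistic (here `"Average"`) and a `"Timestamp"` key.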