amazon-s3

S3 data partitioning for bucket logging files

Submitted by 有些话、适合烂在心里 on 2020-04-11 06:29:03
Question: I have an S3 bucket "ABC" with logging enabled, and the logs are stored in "ABC-logs". Many files arrive in "ABC-logs" every day. Now I want to segregate these logs by year, for example:

s3://ABC-logs/year=2015
s3://ABC-logs/year=2016
s3://ABC-logs/year=2017

What is the best way to do this? I thought of doing it via the awscli, but then at the end of each year I would have to change the bucket logging folder.

Answer 1: The traditional way to do this is via an Amazon EMR cluster. You can use Hive to create an …
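As a hedged sketch of the awscli-style approach the asker mentions (not the EMR/Hive route the answer recommends), the existing logs can be copied under year= prefixes with boto3. This assumes the default server access log key format, whose keys start with the delivery date (e.g. 2016-03-01-...); the bucket name comes from the question, everything else is illustrative:

# Sketch: repartition S3 server access logs into year= prefixes.
# Assumes log keys start with a date stamp, e.g. "2016-03-01-12-00-00-ABC123".
import boto3

s3 = boto3.client("s3")
bucket = "ABC-logs"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        year = key[:4]  # leading "YYYY" of the date-stamped key
        if not year.isdigit():
            continue  # skip keys that are not date-stamped (e.g. already moved)
        s3.copy_object(
            Bucket=bucket,
            Key=f"year={year}/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
        # Delete the original only after a successful copy, if desired:
        # s3.delete_object(Bucket=bucket, Key=key)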

How to create a bucket with Public Read Access?

Submitted by 三世轮回 on 2020-04-11 05:09:38
Question: I'd like to enable public read access, in the serverless.yml file, on all items in my bucket that are in the "public" folder. Currently this is the definition I use to declare my bucket. It's partly copied and pasted from one of the serverless-stack examples.

Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      # Set the CORS policy
      BucketName: range-picker-bucket-${self:custom.stage}
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
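The question is cut off with no captured answer; as a hedged sketch, the usual way to expose only the public/ folder is a bucket policy scoped to that prefix rather than AccessControl: PublicRead on the whole bucket. Below is that policy applied with boto3; the bucket name is a placeholder, and in serverless.yml the same JSON would live in an AWS::S3::BucketPolicy resource:

# Sketch: allow public read only for objects under the "public/" prefix.
# Bucket name is a placeholder for range-picker-bucket-${self:custom.stage}.
import json
import boto3

bucket = "range-picker-bucket-dev"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadForPublicFolder",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/public/*",
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))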

Access Denied for index.html Amazon S3 static website

Submitted by 蓝咒 on 2020-04-10 03:14:01
Question: I've set up an example static website on Amazon S3 and added a custom folder containing a file, custom-folder/index.html, but I'm getting an Access Denied error when trying to access the URL /custom-folder. The index document is configured to be index.html, so S3 should serve index.html when I access the /custom-folder URL, but it doesn't work. How can I fix this?

Answer 1: It seems you are using the wrong URL to access the bucket. For example, when you enable the static website hosting feature …
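The answer above is cut off, but the distinction it starts to draw is between S3's two URL styles. Only the website endpoint applies the index-document rule; the REST endpoint treats /custom-folder as a literal (nonexistent) object key and returns Access Denied. With placeholder bucket and region names:

Website endpoint (serves custom-folder/index.html):
  http://my-bucket.s3-website-us-east-1.amazonaws.com/custom-folder/
REST endpoint (no index-document handling, so Access Denied):
  https://my-bucket.s3.amazonaws.com/custom-folder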

How to read an S3 file for all versions

Submitted by 白昼怎懂夜的黑 on 2020-03-28 06:39:12
Question: Say I have an S3 bucket named "fruits", and inside it an S3 key "apples/55.txt" with 50 revisions. When you read the file using S3Object in Java, it prints the contents of the latest revision only. But I want to read the file content of the current and all previous versions. How do I do this? Is it like first getting all the revision numbers, reading the file by revision number, adding them to a List, and printing?

BufferedReader Bufferread = new BufferedReader(new InputStreamReader(object …
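The asker's snippet is Java, but the guessed flow is right: list the version IDs first, then fetch the object once per version. A minimal sketch with boto3 (the Java SDK offers the same pair of operations); bucket and key come from the question:

# Sketch: read every version of s3://fruits/apples/55.txt, oldest first.
import boto3

s3 = boto3.client("s3")
bucket, key = "fruits", "apples/55.txt"

versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
for v in reversed(versions):  # the API returns newest first
    if v["Key"] != key:
        continue  # Prefix can match longer keys; keep only the exact key
    body = s3.get_object(Bucket=bucket, Key=key, VersionId=v["VersionId"])["Body"]
    print(v["VersionId"], body.read().decode("utf-8"))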

CloudWatch log storage costing vs S3 costing

Submitted by 可紊 on 2020-03-26 06:44:48
Question: I have an EC2 instance running an Apache application, and I have to store my Apache logs somewhere. For this, I have used two approaches: a CloudWatch Agent to push logs to CloudWatch, and a cron job to push the log file to S3. Both methods work fine for me, but I am a little worried about the cost. Which of these will have the lower cost?

Answer 1: S3 pricing is basically based upon three factors:
The amount of storage.
The amount of data transferred every month.
The …
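The answer breaks off before the comparison, but it can be sketched with rough list prices. The figures below are assumptions (us-east-1, circa the question's date) and should be checked against the current pricing pages; the pattern they show, that CloudWatch's per-GB ingestion fee dominates, is the usual outcome for log storage:

# Rough monthly cost sketch for 10 GB of new logs per month.
# All prices are assumed us-east-1 list prices circa 2020; verify them.
gb = 10

cloudwatch = gb * 0.50 + gb * 0.03  # ~$0.50/GB ingestion + ~$0.03/GB-month storage
s3_cost = gb * 0.023 + 0.01         # ~$0.023/GB-month storage + a few PUT requests

print(f"CloudWatch Logs: ~${cloudwatch:.2f}/month")  # ~$5.30
print(f"S3 Standard:     ~${s3_cost:.2f}/month")     # ~$0.24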

Access AWS Athena from Python Lambda in different account

Submitted by 久未见 on 2020-03-25 18:21:37
Question: I have two accounts, A and B. The S3 buckets and the Athena view are in account A, and the Lambda is in account B. I want to call Athena from my Lambda. I have also allowed the Lambda execution role in the S3 bucket policy. When I try to query the database from the Lambda, it fails with the error:

'Status': {'State': 'FAILED', 'StateChangeReason': 'SYNTAX_ERROR: line 1:15: Schema db_name does not exist'

Below is my Lambda code:

import boto3
import time

def lambda_handler(event, context):
    athena_client = boto3.client('athena') …
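The handler is cut off above; a hedged sketch of how such a handler typically continues is below, with every query, database, and bucket name a placeholder. Note what the error itself says: Athena resolved db_name against the catalog of the account running the query (account B), so the account-A database has to be made visible to account B's Athena, not just to its Lambda role via the S3 bucket policy:

# Sketch of a typical Athena-from-Lambda flow; all names are placeholders.
import time
import boto3

def lambda_handler(event, context):
    athena_client = boto3.client('athena')
    response = athena_client.start_query_execution(
        QueryString='SELECT * FROM my_view LIMIT 10',
        QueryExecutionContext={'Database': 'db_name'},  # must exist in the catalog Athena sees
        ResultConfiguration={'OutputLocation': 's3://my-query-results-bucket/'},
    )
    query_id = response['QueryExecutionId']
    while True:  # poll until the query reaches a terminal state
        status = athena_client.get_query_execution(QueryExecutionId=query_id)
        state = status['QueryExecution']['Status']['State']
        if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
            return state
        time.sleep(1)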

Alpakka S3 connector does not provide complete file

Submitted by 送分小仙女□ on 2020-03-25 17:52:14
Question: Downloading a file from S3 storage using the Alpakka S3 connector does not return the whole file, only part of it. Assuming the settings and attributes are correct, since upload works fine, I wonder what the reason could be.

val s3File: Source[Option[(Source[ByteString, NotUsed], ObjectMetadata)], NotUsed] =
  S3.download(bucketName, fileName).withAttributes(attributes)

s3File.runWith(Sink.head)(materializer) flatMap {
  case Some(result) =>
    result._1.runWith(Sink.head)(materializer) map { data => …
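No answer was captured here, but a likely cause, offered as a hedged guess from the snippet alone: S3.download emits the object body as a stream of ByteString chunks, and Sink.head completes after the first element, so the inner runWith(Sink.head) only ever returns the first chunk of the file. Accumulating the stream instead, for example result._1.runFold(ByteString.empty)(_ ++ _), should produce the complete body.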