amazon-s3

Best Way to Monitor Customer Usage of AWS Lambda

偶尔善良 submitted on 2021-02-05 08:46:27
Question: I have recently created an API service that is going to be deployed as a pilot to a customer. It has been built with AWS API Gateway, AWS Lambda, and AWS S3. With a SaaS pricing model, what's the best way for me to monitor this customer's usage and cost? At the moment I have created a unique API Gateway, Lambda function, and S3 bucket specific to this customer. Is there a good way to create a dashboard that lets me (and perhaps the customer) see this usage in detail? An additional question: what's…
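The thread is cut off above, but the question is concrete enough for a starting point. One option (a suggestion of mine, not an answer from the thread) is to pull the per-customer Lambda function's CloudWatch Invocations metric with boto3 and feed it into a dashboard or report; the function name customer-a-api and the 30-day window in this sketch are hypothetical.

```python
# Minimal sketch: per-customer usage from the dedicated Lambda function's
# CloudWatch "Invocations" metric. Function name and time window are assumptions.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "customer-a-api"}],  # hypothetical name
    StartTime=start,
    EndTime=end,
    Period=86400,            # one data point per day
    Statistics=["Sum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), int(point["Sum"]))
```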

Aws get S3 Bucket Size using java api

安稳与你 submitted on 2021-02-05 07:58:09
Question: I have searched on Google for an efficient way to get metadata about an S3 bucket, such as its size and the number of files in it. I found this link discussing the same problem, but it is for PHP and the AWS CLI using CloudWatch. Is there a Java API to fetch this S3 bucket metadata? Thanks Answer 1: You can find the extensive documentation of the AWS S3 Java library here: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/overview-summary.html Answering your question, you can use getSize() for…
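The answer is truncated at getSize(), which comes from the Java SDK's object listings. For illustration only, the same "list the objects and sum their sizes" pattern is shown below in boto3 (Python is used for all sketches on this page; my-bucket is a placeholder).

```python
# Illustrative only: sum object sizes by paginating through the bucket listing.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

total_bytes = 0
object_count = 0
for page in paginator.paginate(Bucket="my-bucket"):     # placeholder bucket
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]
        object_count += 1

print(f"{object_count} objects, {total_bytes / 1024**3:.2f} GiB")
```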

AWS Data Pipeline: Issue with permissions S3 Access for IAM role

梦想的初衷 submitted on 2021-02-05 07:19:26
Question: I'm using the "Load S3 data into RDS MySQL table" template in AWS Data Pipeline to import CSVs from an S3 bucket into our RDS MySQL instance. However, I (as an IAM user with full admin rights) run into a warning I can't resolve: Object:Ec2Instance - WARNING: Could not validate S3 Access for role. Please ensure role ('DataPipelineDefaultRole') has s3:Get*, s3:List*, s3:Put* and sts:AssumeRole permissions for DataPipeline. Google told me not to use the default policies for the DataPipelineDefaultRole and…
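The warning itself spells out the permissions it expects. As a sketch (not the accepted fix from the thread), an inline policy with exactly those actions can be attached to the role with boto3; the policy name is hypothetical and Resource should be narrowed to the real bucket in practice.

```python
# Sketch only: grant DataPipelineDefaultRole the actions named in the warning.
import json

import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*", "s3:Put*", "sts:AssumeRole"],
            "Resource": "*",   # for brevity; scope to the actual bucket/role ARNs
        }
    ],
}

iam.put_role_policy(
    RoleName="DataPipelineDefaultRole",
    PolicyName="datapipeline-s3-access",   # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)
```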

How to find out who uploaded data in S3 bucket

五迷三道 submitted on 2021-02-04 19:09:21
Question: I have an S3 bucket shared with other members of the team. Is there a way to find out who uploaded a particular file to the bucket? Answer 1: Buckets are owned by an AWS account, not individual users. When a user makes an API call, AWS authenticates the user and verifies that they have permission to make the call. After that, it is the account that owns the content. (Although objects can sometimes have a specific owner, which gets even more confusing.) You can now use AWS CloudTrail to track data…
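The answer breaks off at CloudTrail data events. Assuming a trail with S3 data events enabled is already delivering gzipped JSON log files to a logging bucket (bucket names and prefix below are placeholders), a rough sketch for finding who issued PutObject calls could look like this.

```python
# Sketch: scan delivered CloudTrail log files for PutObject records against the
# shared bucket and print the caller identity. All names are placeholders.
import gzip
import json

import boto3

s3 = boto3.client("s3")
LOG_BUCKET = "my-cloudtrail-logs"      # placeholder
PREFIX = "AWSLogs/"                    # placeholder
WATCHED_BUCKET = "my-shared-bucket"    # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=LOG_BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=LOG_BUCKET, Key=obj["Key"])["Body"].read()
        records = json.loads(gzip.decompress(body))["Records"]
        for rec in records:
            if (rec.get("eventName") == "PutObject"
                    and rec.get("requestParameters", {}).get("bucketName") == WATCHED_BUCKET):
                who = rec["userIdentity"].get("arn", rec["userIdentity"])
                print(rec["eventTime"], who, rec["requestParameters"].get("key"))
```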

Python Lambda to send files uploaded to s3 as email attachments

和自甴很熟 submitted on 2021-02-04 16:27:46
Question: We have an online form that gives people the option to upload multiple files. The form is built by a third party, so I don't have any involvement with them. When someone uploads files using the form, it dumps the files into a new folder within an S3 bucket. I want to be able to do the following: get the files from the form filler's upload, attach the files to an email, and send the email to specific people. I have done quite a lot of research, but I'm still new to coding and am trying to use…
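The question is cut off before any answer, but the approach it describes can be sketched: an S3-triggered Lambda that fetches the newly uploaded object and sends it through SES as a MIME attachment. This is a sketch of the pattern, not the asker's final code; the addresses are placeholders and must be verified in SES.

```python
# Sketch: S3 ObjectCreated event -> download the object -> email it via SES.
import os
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

import boto3

s3 = boto3.client("s3")
ses = boto3.client("ses")

SENDER = "forms@example.com"        # placeholder, must be SES-verified
RECIPIENTS = ["team@example.com"]   # placeholder

def lambda_handler(event, context):
    # The S3 event carries the bucket and key of the uploaded file.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    data = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    msg = MIMEMultipart()
    msg["Subject"] = f"New form upload: {key}"
    msg["From"] = SENDER
    msg["To"] = ", ".join(RECIPIENTS)
    msg.attach(MIMEText("A new file was uploaded via the form.", "plain"))

    part = MIMEApplication(data)
    part.add_header("Content-Disposition", "attachment",
                    filename=os.path.basename(key))
    msg.attach(part)

    ses.send_raw_email(Source=SENDER, Destinations=RECIPIENTS,
                       RawMessage={"Data": msg.as_string()})
```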

How to serve an AWS EC2 instance from S3 subdirectory

馋奶兔 submitted on 2021-01-29 20:30:55
Question: I have a website hosted on AWS S3 and served over CloudFront: www.mysite.com. I am hosting a blog on an EC2 instance. I would like to have this blog served from www.mysite.com/blog. For SEO purposes I do not want it to be www.blog.mysite.com. Is it possible to achieve this with only S3 and CloudFront? I have played around with S3 redirects and Lambda@Edge, but the documentation on these is not great. In the case of Lambda@Edge I want to avoid further complexity if I can. S3 redirects work but the…
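No answer is included above. The usual technique, offered here only as a suggestion, is to keep CloudFront in front of both backends: the S3 origin as the default, plus a second origin pointing at the EC2 blog, selected by a cache behavior on the /blog* path pattern. The fragment below shows just that slice of a CloudFront DistributionConfig as a Python dict; domain names and origin IDs are placeholders and the config is deliberately incomplete.

```python
# Illustrative fragment only: two origins plus a path-based cache behavior.
import json

distribution_fragment = {
    "Origins": [
        {"Id": "s3-site", "DomainName": "www-mysite-com.s3.amazonaws.com"},   # placeholder
        {"Id": "ec2-blog", "DomainName": "blog-origin.mysite.com"},           # placeholder
    ],
    "DefaultCacheBehavior": {"TargetOriginId": "s3-site"},
    "CacheBehaviors": [
        # Requests matching /blog* are routed to the EC2 origin, so the blog
        # appears under www.mysite.com/blog without a subdomain.
        {"PathPattern": "/blog*", "TargetOriginId": "ec2-blog"},
    ],
}

print(json.dumps(distribution_fragment, indent=2))
```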

How to submit a SPARK job of which the jar is hosted in S3 object store

夙愿已清 submitted on 2021-01-29 20:22:01
Question: I have a Spark cluster running on YARN, and I want to put my job's jar into a 100% S3-compatible object store. From my searching, submitting the job seems to be as simple as: spark-submit --master yarn --deploy-mode cluster <...other parameters...> s3://my_bucket/jar_file However, the S3 object store requires a user name and password for access. So how can I configure those credentials so that Spark can download the jar from S3? Many thanks! Answer 1: You can use Default…
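The answer is truncated at "You can use Default", presumably the default credential provider chain. As one commonly used alternative, and only an assumption about what the answerer meant, the S3A credentials and endpoint can be set explicitly as Hadoop properties; the PySpark sketch below names the properties, and the same keys can be passed to spark-submit as --conf spark.hadoop.<property>=<value>. All values are placeholders, and the hadoop-aws module must be on the classpath.

```python
# Sketch: explicit S3A credentials/endpoint for an S3-compatible object store.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3a-credentials-example")
    .config("spark.hadoop.fs.s3a.endpoint", "https://objectstore.example.com")  # placeholder
    .config("spark.hadoop.fs.s3a.access.key", "MY_ACCESS_KEY")                  # placeholder
    .config("spark.hadoop.fs.s3a.secret.key", "MY_SECRET_KEY")                  # placeholder
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# With these properties in place, s3a:// paths (including a job jar passed to
# spark-submit) can be resolved against the custom endpoint.
spark.stop()
```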

Cannot copy large (5 Gb) files with awscli 1.5.4

て烟熏妆下的殇ゞ submitted on 2021-01-29 17:45:45
Question: I have a problem with aws-cli. I did a yum update, which updated awscli (among other things), and now awscli fails on large files (e.g. 5.1 GB) with SignatureDoesNotMatch. The exact same command (to the same bucket) works with smaller files. The big file still works if I use boto from Python. It looks like it copies all but two of the parts (the counter got up to 743 of 745 parts), and then the error message appears. Is this a bug in awscli? I could not find anything about it when I googled around, though.
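The asker's own workaround is "use boto from Python". A rough modern equivalent of that workaround, shown with boto3 rather than the legacy boto library and with placeholder bucket and key names, is the managed copy with an explicit multipart chunk size.

```python
# Sketch of the "do it from Python" workaround: managed multipart copy via boto3.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,   # 64 MB parts (placeholder sizes)
)

s3.copy(
    CopySource={"Bucket": "source-bucket", "Key": "big-file.bin"},  # placeholders
    Bucket="destination-bucket",                                    # placeholder
    Key="big-file.bin",
    Config=config,
)
```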

Trying to restore glacier deep archive to different s3 bucket

﹥>﹥吖頭↗ submitted on 2021-01-29 17:40:14
Question: I am trying to restore a Glacier Deep Archive object to a different S3 bucket, but when I run the command below I get this error: fatal error: An error occurred (404) when calling the HeadObject operation: Key "cf-ant-prod" does not exist aws s3 cp s3://xxxxxxx/cf-ant-prod s3://xxxxxxx/atest --force-glacier-transfer --storage-class STANDARD --profile xxx Source: https://stackoverflow.com/questions/63830307/trying-to-restore-glacier-deep-archive-to-different-s3-bucket
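The thread ends without an answer. Two points, stated here as my own assumptions: the 404 suggests cf-ant-prod is being treated as a single key rather than a prefix (aws s3 cp needs --recursive for a prefix), and a Deep Archive object must be restored before it can be copied at all. A boto3 sketch of that two-step flow, with placeholder names, follows.

```python
# Sketch: request a restore of the archived object, then (once the restore has
# completed, typically hours later for Deep Archive) copy it to the target bucket.
import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "source-bucket"     # placeholder
KEY = "cf-ant-prod/some-object"     # placeholder: restore works per object key
TARGET_BUCKET = "target-bucket"     # placeholder

# Step 1: ask S3 to make the archived object temporarily readable.
s3.restore_object(
    Bucket=SOURCE_BUCKET,
    Key=KEY,
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)

# Step 2 (run later): check whether the restore finished, then copy it out.
head = s3.head_object(Bucket=SOURCE_BUCKET, Key=KEY)
if 'ongoing-request="false"' in head.get("Restore", ""):
    s3.copy_object(
        Bucket=TARGET_BUCKET,
        Key=KEY,
        CopySource={"Bucket": SOURCE_BUCKET, "Key": KEY},
        StorageClass="STANDARD",
    )
```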
