amazon-s3

Can I trigger an ECS/Fargate task from a specific file upload in S3?

主宰稳场 submitted on 2021-02-09 02:44:10

Question: I know that I can trigger a task when a file is uploaded (per https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-ECS.html), but how can I trigger a task when a specific file is uploaded? Amazon seems not to have anticipated people having multiple jobs watching the same bucket for different files :( Answer 1: You can accomplish this with CloudWatch Events from CloudTrail data events. Head over to CloudTrail and create a Trail for your account. For Apply trail
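
The answer excerpt stops short of the rule itself, so here is a minimal sketch of the idea in Python with boto3: once CloudTrail data events are enabled for the bucket, a CloudWatch Events rule can match PutObject calls on one exact key. The bucket name, object key, and rule name below are placeholders, not values from the question.

```python
import json
import boto3

events = boto3.client("events")

# Match CloudTrail data events for a PutObject on one specific object key.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["my-bucket"],      # placeholder bucket
            "key": ["reports/daily.csv"],     # the one file to watch
        },
    },
}

events.put_rule(Name="run-task-on-daily-csv", EventPattern=json.dumps(pattern))
# The rule's target would then be the ECS task, wired up as in the
# CloudWatch Events/ECS tutorial linked above (events.put_targets).
```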

Is there a way for a Lambda function to be triggered by multiple S3 buckets?

别说谁变了你拦得住时间么 submitted on 2021-02-08 15:13:50

Question: I'm trying to create a Lambda function that will be triggered by any change made to any bucket in the S3 console. Is there a way to tie all create events from every bucket in S3 to my Lambda function? It appears that in the creation of a Lambda function, you can only select one S3 bucket. Is there a way to do this programmatically, if not in the Lambda console? Answer 1: There is at least one way: you can set up an S3 event notification, for each bucket you want to monitor, all pointing to a
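
The excerpt cuts off at the fan-in idea (many buckets, one function), so here is a minimal sketch of that reading in Python with boto3: loop over the buckets and point each one's notification configuration at the same Lambda ARN. The ARN and bucket names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:my-handler"  # placeholder
buckets = ["bucket-a", "bucket-b", "bucket-c"]                            # placeholder

for name in buckets:
    # Each bucket notifies the same function on any object-created event.
    # The function's resource policy must also allow s3.amazonaws.com to
    # invoke it for each source bucket (lambda add-permission).
    s3.put_bucket_notification_configuration(
        Bucket=name,
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [{
                "LambdaFunctionArn": LAMBDA_ARN,
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )
```

For truly every bucket, including ones created later, a CloudTrail data-event rule covering all buckets (as in the first question above) avoids the per-bucket loop.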

Passing an IAM role to a Docker container on EC2

落爺英雄遲暮 submitted on 2021-02-08 10:21:51

Question: What is the suggested way to pass an IAM role to a Docker container on EC2? I have an mlflow project running in a Docker environment on EC2. The Python code needs to read from and write to S3. The following is the error (sometimes other types of error as well, all indicating no S3 access from the container, for example an S3 resource-not-found error): botocore.exceptions.ProfileNotFound: The config profile (xxx) could not be found To solve the S3 access issue, I already created an IAM role that allows access
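
A common explanation for ProfileNotFound here is that boto3 inside the container is being forced onto a named profile that only exists on the host. A minimal sketch of the check, assuming the EC2 host has an instance profile attached, the container omits AWS_PROFILE and any mounted credentials file, and the instance metadata service is reachable from the container (IMDSv2's default hop limit of 1 can block it):

```python
import boto3

# No profile_name: let the default credential chain resolve. From a container
# on EC2 it can fall through env vars and config files to the instance
# metadata service and pick up the host's IAM role.
session = boto3.Session()
creds = session.get_credentials()
print(creds.method)  # expect "iam-role" when the instance profile is in use

s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```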

Image upload to S3 does not render

╄→尐↘猪︶ㄣ submitted on 2021-02-08 10:00:38

Question: I am using Expo Image Picker to send a cropped image to S3. The file is being uploaded, but it does not render as an image because it's not recognised as one. If I take the blob data and use it in a base64-to-image encoder I get the image, so it must be MIME- or encoding-based. Here is what I have. I invoke the Expo Image Picker with: let pickerResult = await ImagePicker.launchImageLibraryAsync({ allowsEditing: true, aspect: [1, 1], base64: true, exif: false, }); The params I use to create the
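
The excerpt is cut off before the upload parameters, but the symptom (valid base64, unrenderable object) usually means the base64 text itself was stored, or the object carries no image MIME type. A server-side sketch of the fix in Python with boto3, assuming a JPEG from the picker; bucket and key names are placeholders:

```python
import base64
import boto3

s3 = boto3.client("s3")

def upload_picker_image(base64_data: str, bucket: str, key: str) -> None:
    # Decode to raw bytes first: storing the base64 string gives S3 a text
    # blob that no browser will render as an image.
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=base64.b64decode(base64_data),
        ContentType="image/jpeg",  # without this, S3 serves a generic type
    )
```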

Direct upload of a string from the browser to S3 without a local file

泪湿孤枕 submitted on 2021-02-08 05:34:28

Question: I am using JavaScript, Node.js and the AWS SDK. There are many examples of uploading existing files to S3 directly with a signed URL, but now I am trying to upload strings and create a file in S3, without any locally saved files. Any suggestions, please? Answer 1: Have not tried amazon-web-services, amazon-s3 or aws-sdk, though if you are able to upload File or FormData objects you can create either or both in JavaScript and upload the object. // create a `File` object const file = new File(["abc"],
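
Staying with the question's constraint (no local file anywhere), a minimal Python sketch of the signed-URL variant: the server mints a presigned PUT URL and the client sends the raw string to it. Bucket and key are placeholders.

```python
import boto3
import requests

s3 = boto3.client("s3")

# Server side: presigned URL good for one PUT of a plain-text object.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "notes/hello.txt",
            "ContentType": "text/plain"},
    ExpiresIn=300,
)

# Client side (fetch() in a browser does the same thing): PUT the string
# directly; nothing is ever written to disk.
requests.put(url, data="hello from a string",
             headers={"Content-Type": "text/plain"})
```

Note that the Content-Type sent in the PUT must match the one baked into the signature.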

Amazon S3 EPIPE Error

拟墨画扇 submitted on 2021-02-08 05:19:35

Question: UPDATE: got it working from the command line after adding a full-access policy to that user. Now when I do it with Node there is no error, but I can't see the files in my S3 file manager. I keep getting an EPIPE error using Amazon's S3 service. I am a little stuck and unsure of how to proceed. I am using Node.js with the Knox module. Here is my code: var client = knox.createClient({ key: amazonAccessKeyId, secret: amazonSecretAccessKey, bucket: amazonBucketName }); function moveFiles()
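
The update describes an upload that "succeeds" silently while the object never appears, so a useful pattern is to verify the object after the put instead of trusting the lack of an error. A sketch of that read-back check in Python with boto3 (placeholder names), since EPIPE-style failures often turn out to be permission or bucket mismatches that only surface on a subsequent read:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def move_file(path: str, bucket: str, key: str) -> None:
    try:
        s3.upload_file(path, bucket, key)
        # Read it back: a 403/404 here explains "no error but no file".
        s3.head_object(Bucket=bucket, Key=key)
        print(f"verified s3://{bucket}/{key}")
    except ClientError as err:
        print("upload or verify failed:", err.response["Error"]["Code"])
        raise
```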

CkRest.AddHeader function does not add a header using Chilkat C++ (“Content-MD5” header using fullRequestBinary PUT)

痴心易碎 submitted on 2021-02-08 03:39:30

Question: We are using the Chilkat 9.5.0.80 C++ library. There is a certain HTTP header we cannot add to our requests: "Content-MD5". When we add this header like this: m_ckRest.AddHeader("Content-MD5", "any-value-here"); and examine the resulting request, the "Content-MD5" header is NOT present. However, when we add a header with a different name: m_ckRest.AddHeader("Content-Type", "application/octet-stream"); ... the resulting request DOES contain that header. We are using the "fullRequestBinary" method,
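
Whatever the library ends up doing with the header, the value S3 expects in Content-MD5 is fixed by RFC 1864: the base64 encoding of the raw 16-byte MD5 digest of the body, not the hex string. A small illustration of computing that value, in Python for brevity:

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    # RFC 1864: base64 of the raw digest bytes, not of the hex form.
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

print(content_md5(b"example payload"))  # value to send as Content-MD5
```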

Reading partition columns without partition column names

半城伤御伤魂 submitted on 2021-02-08 03:36:05

Question: We have data stored in S3, partitioned in the following structure: bucket/directory/table/aaaa/bb/cc/dd/ where aaaa is the year, bb is the month, cc is the day and dd is the hour. As you can see, there are no partition keys in the path (year=aaaa, month=bb, day=cc, hour=dd). As a result, when I read the table into Spark, there are no year, month, day or hour columns. Is there any way I can read the table into Spark and include the partitioned columns without: changing the path names in s3
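
The question trails off into its list of constraints, but one common approach that leaves the S3 paths untouched is to recover the partition values from the file path itself. A PySpark sketch, assuming Parquet files and a placeholder bucket/prefix:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import input_file_name, regexp_extract

spark = SparkSession.builder.getOrCreate()

# Read the whole tree; the path carries the values even though it lacks
# the key= names Spark needs for automatic partition discovery.
df = spark.read.parquet("s3a://bucket/directory/table/*/*/*/*/")

path = input_file_name()
pat = r".*/table/(\d{4})/(\d{2})/(\d{2})/(\d{2})/.*"
df = (df
      .withColumn("year",  regexp_extract(path, pat, 1))
      .withColumn("month", regexp_extract(path, pat, 2))
      .withColumn("day",   regexp_extract(path, pat, 3))
      .withColumn("hour",  regexp_extract(path, pat, 4)))
```

Another option, if a catalog is available, is registering the table with explicit partition locations (ALTER TABLE ... ADD PARTITION), but the path-parsing approach above needs no catalog at all.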