amazon-s3

Upload & fetch media files from AWS S3 in Flutter

我们两清 Submitted on 2020-01-13 10:24:10
Question: My Flutter app uses Firebase as a backend, but I need to store media files (photos and videos) in my S3 bucket. The goal is to upload the media retrieved from the image picker to S3 and get back the URL, which can then be stored as a string in my Firebase database. The problem is the scarcity of AWS libraries or APIs for Dart 2: I found three on pub, but two of them were incompatible with Dart 2 and one was still under development. Has anyone implemented this in Flutter using Dart 2? Any suggestions are welcome.
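One common workaround when no suitable Dart SDK is available is to generate a pre-signed PUT URL on a small backend and have the Flutter client upload with a plain HTTP PUT, so no AWS credentials or SDK are needed on the device. A minimal server-side sketch in Python with boto3; the bucket, key, and content type below are assumed placeholders, not values from the question:

    import boto3

    # Credentials come from the usual boto3 chain (env vars, instance role, etc.).
    s3_client = boto3.client('s3')

    def make_upload_url(bucket, key, content_type='image/jpeg'):
        # Pre-signed PUT URL valid for one hour; the mobile client PUTs the
        # raw file bytes to it with a matching Content-Type header.
        return s3_client.generate_presigned_url(
            'put_object',
            Params={'Bucket': bucket, 'Key': key, 'ContentType': content_type},
            ExpiresIn=3600,
        )

    url = make_upload_url('my-media-bucket', 'uploads/photo_123.jpg')
    # After the client uploads, the object's permanent URL (e.g.
    # https://my-media-bucket.s3.amazonaws.com/uploads/photo_123.jpg)
    # is the string you would store in the Firebase database.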

Accessing Meta Data from AWS S3 with AWS Lambda

牧云@^-^@ Submitted on 2020-01-13 09:13:46
Question: I would like to retrieve some metadata I added (via the console, as x-amz-meta-my_variable) every time I upload an object to S3. I have set up Lambda through the console to trigger every time an object is uploaded to my bucket. I am wondering whether I can use something like variable = event['Records'][0]['s3']['object']['my_variable'] to retrieve this data, or whether I have to connect back to S3 with the bucket and key and then call some function to retrieve it. Below is the code: from __future__ import …
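The S3 event notification does not carry user-defined metadata, so the Lambda has to call back to S3 (a HEAD request) with the bucket and key from the event. A minimal sketch, assuming the metadata was stored as x-amz-meta-my_variable (boto3 exposes it with the prefix stripped, as 'my_variable' in the Metadata dict):

    import urllib.parse
    import boto3

    s3_client = boto3.client('s3')

    def lambda_handler(event, context):
        record = event['Records'][0]['s3']
        bucket = record['bucket']['name']
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record['object']['key'])

        # The event itself contains no user metadata, so HEAD the object.
        head = s3_client.head_object(Bucket=bucket, Key=key)
        my_variable = head['Metadata'].get('my_variable')
        return my_variable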

Write to a specific folder in S3 bucket using AWS Kinesis Firehose

安稳与你 Submitted on 2020-01-13 09:05:52
Question: I would like to route data sent to Kinesis Firehose based on the content of the data. For example, if I sent this JSON data: { "name": "John", "id": 345 } I would like to filter on id and send the record to a subfolder of my S3 bucket, such as s3://myS3Bucket/345_2018_03_05. Is this at all possible with Kinesis Firehose or AWS Lambda? The only way I can think of right now is to resort to creating a Kinesis stream for every single one of my possible IDs and pointing them to the …
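Firehose delivers everything under a single configured prefix, so content-based partitioning generally means doing the routing yourself, for example with a Lambda (triggered from a Kinesis stream or called by the producer) that writes each record under an id-based prefix. A rough sketch of that idea; the bucket name comes from the question, while the key layout and date handling are assumptions, and this bypasses Firehose's buffering entirely:

    import json
    from datetime import datetime, timezone
    import boto3

    s3_client = boto3.client('s3')
    BUCKET = 'myS3Bucket'  # bucket name from the question

    def write_record(payload: bytes):
        data = json.loads(payload)
        now = datetime.now(timezone.utc)
        # Build a prefix like 345_2018_03_05/ from the record's id and the date.
        prefix = "{}_{}".format(data['id'], now.strftime('%Y_%m_%d'))
        key = "{}/{}.json".format(prefix, now.strftime('%H%M%S%f'))
        s3_client.put_object(Bucket=BUCKET, Key=key, Body=payload)

    # Example: write_record(b'{"name": "John", "id": 345}')
    # lands under s3://myS3Bucket/345_<today's date>/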

`EMR service role is invalid` when Creating EMR Cluster

主宰稳场 Submitted on 2020-01-13 08:17:10
Question: After creating the Amazon S3 bucket my_bucket, I created an Elastic MapReduce cluster via the CLI: aws emr create-cluster --name "Hive testing" --ami-version 3.3 --applications Name=Hive --use-default-roles --instance-type m3.xlarge --instance-count 3 --steps Type=Hive,Name="Hive Program",Args=[-d,INPUT=s3://my_bucket/input,-d,OUTPUT=s3://my_bucket/input,-d,LIBS=s3://my_bucket/serde_libs] Note that I did not specify a Hive *.q file. After making the S3 bucket and the EMR cluster, I will log onto the …
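The "EMR service role is invalid" error typically means the default roles that --use-default-roles refers to (EMR_DefaultRole and EMR_EC2_DefaultRole) do not yet exist in the account; running aws emr create-default-roles once creates them. For reference, a hedged boto3 sketch that names the roles explicitly (instance type and count copied from the question; the release label is an assumption, since --ami-version 3.3 has long been superseded):

    import boto3

    emr = boto3.client('emr')

    # Assumes the default roles already exist (aws emr create-default-roles).
    response = emr.run_job_flow(
        Name='Hive testing',
        ReleaseLabel='emr-5.29.0',          # assumed modern stand-in for --ami-version 3.3
        Applications=[{'Name': 'Hive'}],
        ServiceRole='EMR_DefaultRole',       # the role the error complains about
        JobFlowRole='EMR_EC2_DefaultRole',   # EC2 instance profile for the nodes
        Instances={
            'MasterInstanceType': 'm3.xlarge',
            'SlaveInstanceType': 'm3.xlarge',
            'InstanceCount': 3,
            'KeepJobFlowAliveWhenNoSteps': True,
        },
    )
    print(response['JobFlowId'])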

How do I read a csv stored in S3 with csv.DictReader?

别等时光非礼了梦想. Submitted on 2020-01-13 08:09:36
Question: I have code that fetches an AWS S3 object. How do I read this StreamingBody with Python's csv.DictReader?
    import boto3, csv
    session = boto3.session.Session(aws_access_key_id=<>, aws_secret_access_key=<>, region_name=<>)
    s3_resource = session.resource('s3')
    s3_object = s3_resource.Object(<bucket>, <key>)
    streaming_body = s3_object.get()['Body']
    # csv.DictReader(???)
Answer 1: The code would be something like this:
    import boto3
    import csv
    # get a handle on s3
    s3 = boto3.resource(u's3')
    # get a handle …
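The missing piece is that csv.DictReader expects an iterable of text lines, while the StreamingBody yields bytes; wrapping the body in a text decoder bridges the two. A minimal sketch, with the bucket and key as placeholders standing in for the question's <bucket> and <key>:

    import codecs
    import csv
    import boto3

    s3_resource = boto3.resource('s3')
    s3_object = s3_resource.Object('my-bucket', 'my-key.csv')   # placeholders
    streaming_body = s3_object.get()['Body']

    # DictReader needs text lines, so decode the byte stream as it is read.
    reader = csv.DictReader(codecs.getreader('utf-8')(streaming_body))
    for row in reader:
        print(row)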

Boto s3 get_metadata

て烟熏妆下的殇ゞ Submitted on 2020-01-13 08:04:51
Question: I am trying to get metadata that I have set on all my items in an S3 bucket (visible in the screenshot). Below is the code I'm using; the two get_metadata calls return None. Any ideas? boto.Version is '2.5.2'.
    amazon_connection = S3Connection(ec2_key, ec2_secret)
    bucket = amazon_connection.get_bucket('test')
    for key in bucket.list():
        print " Key %s " % (key)
        print key.get_metadata("company")
        print key.get_metadata("x-amz-meta-company")
Answer 1: bucket.list() does not return metadata. Try this: …
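In boto 2, bucket.list() returns lightweight key stubs with no metadata attached; each key has to be fetched individually (a HEAD request via get_key) before get_metadata returns anything. A sketch under that assumption, reusing the question's credential placeholders:

    from boto.s3.connection import S3Connection

    # ec2_key / ec2_secret are the same credential placeholders as in the question.
    amazon_connection = S3Connection(ec2_key, ec2_secret)
    bucket = amazon_connection.get_bucket('test')

    for stub in bucket.list():
        # get_key() issues a HEAD request that actually populates user metadata.
        key = bucket.get_key(stub.name)
        # The stored header x-amz-meta-company is looked up as plain 'company'.
        print(key.get_metadata('company'))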

Upload Files directly to S3 chunk-by-chunk using Play Scala using Iteratees

雨燕双飞 Submitted on 2020-01-13 06:05:13
Question: I have tried in vain to upload files directly to S3 using Iteratees. I am still new to functional programming and finding it hard to piece together some working code. I have written an iteratee that processes chunks of the uploaded file and sends them to S3, but the upload fails at the end with an error. Please help me fix this. Below is the code I came up with. Controller handler:
    def uploadFile = Action.async(BodyParser(rh => S3UploadHelper("bucket-name").s3Iteratee() )) { implicit request => …
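Whatever the Play/Iteratee wrapper looks like, the S3 side it has to drive is the multipart upload API: initiate, upload parts of at least 5 MB each (except the last), then complete the upload with the collected ETags; an upload that "fails at the end" is often a missing or malformed complete call. A minimal boto3 sketch of that sequence, not the questioner's Scala code; bucket and key are placeholders:

    import boto3

    s3_client = boto3.client('s3')
    BUCKET, KEY = 'bucket-name', 'uploads/bigfile.bin'   # placeholders

    def multipart_upload(chunks):
        """chunks: iterable of byte strings, each >= 5 MB except possibly the last."""
        upload = s3_client.create_multipart_upload(Bucket=BUCKET, Key=KEY)
        parts = []
        for number, chunk in enumerate(chunks, start=1):
            resp = s3_client.upload_part(
                Bucket=BUCKET, Key=KEY, UploadId=upload['UploadId'],
                PartNumber=number, Body=chunk,
            )
            parts.append({'PartNumber': number, 'ETag': resp['ETag']})
        # The object only materializes once the upload is completed with all ETags.
        s3_client.complete_multipart_upload(
            Bucket=BUCKET, Key=KEY, UploadId=upload['UploadId'],
            MultipartUpload={'Parts': parts},
        )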

Need to upload file on S3 in Java

一个人想着一个人 Submitted on 2020-01-13 05:39:31
Question: I have started working on AWS recently and am currently developing upload functionality to S3 storage. As I understand it, there are two ways to upload a file to S3: (1) the client's file gets uploaded to my server and I upload it to S3 using my credentials (I will also be able to hide this from the client, since I will not expose the upload details); (2) upload directly to S3. I was able to implement the first approach using the simple upload API, but I want to skip the "write …
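The second approach usually means handing the client a pre-signed request, so the file bytes never pass through your server while your credentials stay hidden. The AWS SDK for Java offers this via GeneratePresignedUrlRequest; for brevity, a hedged boto3 sketch of the pre-signed POST variant, with the bucket name as a placeholder:

    import boto3

    s3_client = boto3.client('s3')

    # The server signs the upload policy; the client uploads the file directly to S3.
    post = s3_client.generate_presigned_post(
        Bucket='my-upload-bucket',        # placeholder
        Key='uploads/${filename}',        # ${filename} is filled in by the client's form
        ExpiresIn=600,
    )
    # Return post['url'] and post['fields'] to the client, which submits a
    # multipart/form-data POST with the file; credentials never leave the server.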