amazon-s3

Add Dynamic Content-Disposition for file names (Amazon S3) in Python

江枫思渺然 submitted on 2021-02-07 20:30:17
Question: I have a Django model that saves the filename as "uuid4().pdf", where uuid4 generates a random UUID for each instance created. The file is stored on the Amazon S3 server under that same name. I am trying to add a custom Content-Disposition for the filenames I upload to Amazon S3, because I want to see a custom name whenever I download a file, not the UUID one. At the same time, I want the files to be stored on S3 under the UUID filename. I am using django-storages with Python 2.7. I
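
One way to get this behaviour, sketched below, is to keep the UUID key on S3 but serve downloads through a pre-signed URL that overrides Content-Disposition at request time. This is a minimal boto3 sketch; the bucket name, key, and download name are placeholders, not values from the question.

```python
import boto3

s3_client = boto3.client("s3")

def download_url(bucket, uuid_key, download_name, expires=300):
    """Return a temporary URL that serves the UUID-named object
    under a human-readable filename via Content-Disposition."""
    return s3_client.generate_presigned_url(
        ClientMethod="get_object",
        Params={
            "Bucket": bucket,
            "Key": uuid_key,
            "ResponseContentDisposition": 'attachment; filename="%s"' % download_name,
        },
        ExpiresIn=expires,
    )
```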

Bulk generate pre-signed URLs with boto3

百般思念 submitted on 2021-02-07 19:51:00
Question: I am currently using the following to create a pre-signed URL for a bucket resource: bucket_name = ... key = ... s3_client = ... s3_client.generate_presigned_url( ClientMethod="get_object", Params={ "Bucket": bucket_name, "Key": key }, ExpiresIn=100 ) This works fine. However, I was wondering whether it is possible to generate pre-signed URLs for multiple keys in one request, or whether one request per key is required. I didn't find anything useful in the docs regarding this topic. I'm
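
Pre-signed URLs are signed locally by the SDK, so the usual pattern is simply a loop over the keys; no network round trip is made per URL. A minimal sketch, assuming the keys are already known (bucket and key names are placeholders):

```python
import boto3

s3_client = boto3.client("s3")

def presign_many(bucket, keys, expires=100):
    """Generate one pre-signed GET URL per key.

    generate_presigned_url signs locally, so looping over keys
    does not call S3 once per URL.
    """
    return {
        key: s3_client.generate_presigned_url(
            ClientMethod="get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=expires,
        )
        for key in keys
    }
```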

How to read all files in S3 folder/bucket using sparklyr in R?

只谈情不闲聊 submitted on 2021-02-07 17:24:30
Question: I have tried the code below, and combinations of it, in order to read all the files in an S3 folder, but nothing seems to work. Sensitive information/code has been removed from the script below. There are 6 files, each 6.5 GB. #Spark Connection sc<-spark_connect(master = "local" , config=config) rd_1<-spark_read_csv(sc,name = "Retail_1",path = "s3a://mybucket/xyzabc/Retail_Industry/*/*",header = F,delimiter = "|") # This is the S3 bucket/folder for files [One of the file names Industry_Raw
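
For reference, the same wildcard read expressed in PySpark rather than sparklyr (a swapped-in analogue; the s3a path and "|" delimiter are taken from the question, everything else is an assumption, and the hadoop-aws connector plus S3 credentials still need to be configured just as in the R case):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("retail_read").getOrCreate()

# Read every pipe-delimited CSV matching the wildcard path on S3.
rd_1 = (
    spark.read
    .option("header", "false")
    .option("delimiter", "|")
    .csv("s3a://mybucket/xyzabc/Retail_Industry/*/*")
)
```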

About permissions in S3 file transfer

半城伤御伤魂 submitted on 2021-02-07 15:01:09
Question: I'm using the S3TransferManager-Sample for testing. I created the Cognito identity pool, set up the IAM roles, and finally changed the constants.swift file. I have no problem uploading, but downloading fails. The error message is: download failed: [Error Domain=com.amazonaws.AWSS3ErrorDomain Code=1 "The operation couldn’t be completed. (com.amazonaws.AWSS3ErrorDomain error 1.)" UserInfo=0x7f8cd658a5a0 {HostId=d4yLouhlYmGn4s1Zp54+EOsZQEy2bVEGNs5XIa8pMxerJggANV/9Zb82c1QtF/5Hsn5KqYXGqdw=, Message=Access Denied, Code
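
Access Denied on download while uploads succeed usually points at the policy attached to the Cognito authenticated role allowing s3:PutObject but not s3:GetObject. A hedged sketch of the kind of policy that role would need, written here as a Python dict (the bucket name is a placeholder, not the one from the question):

```python
import json

# IAM policy granting the Cognito role read access to the transfer bucket.
download_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-transfer-bucket",
                "arn:aws:s3:::my-transfer-bucket/*",
            ],
        }
    ],
}

print(json.dumps(download_policy, indent=2))
```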

boto3 - AWS Lambda - copy files between buckets

末鹿安然 submitted on 2021-02-07 14:19:02
Question: I am trying to copy multiple files from a source bucket to a destination bucket using AWS Lambda and am getting the error below. The bucket structures are as follows.

Source Buckets
mysrcbucket/Input/daily/acctno_pin_xref/ABC_ACCTNO_PIN_XREF_FULL_20170926_0.csv.gz
mysrcbucket/Input/daily/acctno_pin_xref/ABC_ACCTNO_PIN_XREF_FULL_20170926_1.csv.gz
mysrcbucket/Input/daily/acctno_pin_xref/ABC_ACCTNO_PIN_XREF_count_20170926.inf

Destination Buckets
mydestbucket/Input/daily/acctno_pin_xref/ABC_ACCTNO_PIN
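
For orientation, a minimal boto3 sketch of copying every object under the prefix from the source bucket to the destination bucket inside a Lambda handler. The bucket and prefix names are taken from the question; the handler wiring is an assumption, not the asker's actual code:

```python
import boto3

s3 = boto3.resource("s3")

def lambda_handler(event, context):
    src_bucket = "mysrcbucket"
    dest_bucket = "mydestbucket"
    prefix = "Input/daily/acctno_pin_xref/"

    # Copy each object under the prefix, keeping the same key layout.
    for obj in s3.Bucket(src_bucket).objects.filter(Prefix=prefix):
        copy_source = {"Bucket": src_bucket, "Key": obj.key}
        s3.Bucket(dest_bucket).copy(copy_source, obj.key)

    return {"status": "done"}
```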

Reading pretty-printed JSON files in Apache Spark

前提是你 submitted on 2021-02-07 13:50:22
Question: I have a lot of JSON files in my S3 bucket and I want to be able to read and query them. The problem is that they are pretty-printed. Each JSON file contains just one massive dictionary, but it is not on a single line. As per this thread, a dictionary in a JSON file should be on one line, which is a limitation of Apache Spark. I don't have it structured that way. My JSON schema looks like this - { "dataset": [ { "key1": [ { "range": "range1", "value": 0.0 }, { "range": "range2", "value": 0.23 } ]
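
The usual workaround, sketched below, is Spark's multiLine JSON option, which lets a single record span multiple lines. The S3 path is a placeholder and this is an assumption about the setup rather than the asker's code:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pretty_json").getOrCreate()

# multiLine tells Spark each JSON document may span several lines, which
# handles pretty-printed files at the cost of less parallel parsing per file.
df = spark.read.option("multiLine", True).json("s3a://my-bucket/path/to/json/")

df.printSchema()
```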

How does S3 Select pricing work? What do "data returned" and "data scanned" mean in S3 Select?

左心房为你撑大大i submitted on 2021-02-07 13:40:38
Question: I have 1M rows of CSV data. If I select 10 rows, will I be billed for 10 rows? What do "data returned" and "data scanned" mean in S3 Select? There is little documentation on these terms for S3 Select.

Answer 1: To keep things simple, let's forget for a moment that S3 reads in a columnar way. Suppose you have the following data:

| City      | Last Updated Date |
|-----------|-------------------|
| London    | 1st Jan           |
| London    | 2nd Jan           |
| New Delhi | 2nd Jan           |

A query for fetching the latest update date forces
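
To make the two terms concrete, a minimal boto3 sketch of an S3 Select query that returns only 10 rows from a CSV object (bucket, key, and the header assumption are placeholders). The Stats event reports bytes scanned versus bytes returned, and charges are based on those byte counts, not on row counts:

```python
import boto3

s3 = boto3.client("s3")

response = s3.select_object_content(
    Bucket="my-bucket",
    Key="data/rows.csv",
    ExpressionType="SQL",
    Expression="SELECT * FROM s3object s LIMIT 10",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

for event in response["Payload"]:
    if "Records" in event:
        # The rows actually sent back ("data returned").
        print(event["Records"]["Payload"].decode("utf-8"))
    elif "Stats" in event:
        # Bytes scanned vs. bytes returned drive the S3 Select charges.
        print(event["Stats"]["Details"])
```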

Sending a file directly from the browser to S3 but changing the file name

∥☆過路亽.° submitted on 2021-02-07 13:18:35
Question: I am using signed, authorized S3 uploads so that users can upload files directly from their browser to S3, bypassing my server. This presently works, but the file is saved under the same name it has on the user's machine. I'd like to save it on S3 under a different name. The form data I post to Amazon looks like this: var formData = new FormData(); formData.append('key', targetPath); // e.g. /path/inside/bucket/myFile.mov formData.append('AWSAccessKeyId', s3Auth.AWSAccessKeyId); // aws public key formData.append(
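
Since the 'key' field of the POST policy decides the stored object name, one server-side approach, sketched below with boto3, is to presign the POST with the key the server wants and record the user's original name in Content-Disposition. The bucket name and helper are placeholders, not the asker's setup:

```python
import boto3

s3 = boto3.client("s3")

def presigned_post_for(original_name, stored_key, bucket="my-upload-bucket"):
    """Presign a browser POST so the object is stored under stored_key
    while Content-Disposition preserves the user's original file name."""
    disposition = 'attachment; filename="%s"' % original_name
    return s3.generate_presigned_post(
        Bucket=bucket,
        Key=stored_key,
        Fields={"Content-Disposition": disposition},
        Conditions=[{"Content-Disposition": disposition}],
        ExpiresIn=3600,
    )
```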