amazon-s3

S3ToSFTP: Move multiple files from same S3 key to SFTP path

时光总嘲笑我的痴心妄想 · Submitted on 2020-08-06 06:40:49
Question: Requirement: move multiple files from the same S3 key to SFTP. Below is the relevant part of the code; with it I was able to move one file into the SFTP location. If the s3_key location has more than one file, for example:

    /path/output/abc.csv
    /path/output/def.csv

I need to get both files from /path/output to the SFTP location. Tried passing s3_key as '/path/output/*.csv', but both files are not posted. Code:

    with sftp.open(sftp_path + key_name, 'wb') as f:
        s3_client.download_fileobj(s3_bucket, s3_key,
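
A wildcard in the key will not work: S3 has no glob matching on object keys, so each object under the prefix has to be listed and copied individually. A minimal sketch under stated assumptions: `s3_client` is a boto3 S3 client and `sftp` a paramiko `SFTPClient` (the function name and parameters are hypothetical; any objects exposing the same methods work).

```python
import posixpath

def copy_prefix_to_sftp(s3_client, sftp, bucket, prefix, sftp_dir):
    """Copy every object under an S3 prefix to an SFTP directory.

    Lists the prefix with a paginator (a flat listing is capped at
    1000 keys per call) and streams each object straight into the
    remote file, so nothing is buffered on local disk.
    """
    paginator = s3_client.get_paginator('list_objects_v2')
    copied = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get('Contents', []):
            key = obj['Key']
            if key.endswith('/'):          # skip "folder" placeholder keys
                continue
            name = posixpath.basename(key)
            with sftp.open(posixpath.join(sftp_dir, name), 'wb') as f:
                s3_client.download_fileobj(bucket, key, f)
            copied.append(key)
    return copied
```

Called with `prefix='path/output/'`, this would pick up both abc.csv and def.csv in one pass.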

How to access AWS CloudFront that connected with S3 Bucket via Bearer token of a specific user (JWT Custom Auth)

做~自己de王妃 · Submitted on 2020-08-05 09:53:07
Question: I am using the Serverless Framework to deploy a serverless stack to AWS. My stack consists of some Lambda functions, DynamoDB tables, and an API Gateway. I protected the API Gateway using what's called a Lambda authorizer. Also, I have a custom standalone self-hosted auth service that can generate tokens. So the scenario is that the user can request a token from this service (it's IdentityServer4 hosted on Azure), then the user can send a request to the API Gateway with the bearer token so the API
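
For the API Gateway half of such a setup, the Lambda authorizer receives the bearer token and returns an IAM policy. A stripped-down sketch of that shape is below; everything in it is illustrative, and crucially the signature check is omitted — a real authorizer must verify the token against the IdentityServer4 JWKS endpoint, e.g. with the PyJWT library, before trusting any claim.

```python
import base64
import json
import time

def _jwt_payload(token):
    """Decode the payload segment of a JWT. NO signature verification is
    done here; a production authorizer must verify against the issuer's
    signing keys before using the claims."""
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def authorizer_handler(event, context):
    """TOKEN-type Lambda authorizer: map a bearer token to Allow/Deny."""
    token = event.get('authorizationToken', '').replace('Bearer ', '')
    claims, ok = {}, False
    try:
        claims = _jwt_payload(token)
        ok = claims.get('exp', 0) > time.time()    # reject expired tokens
    except Exception:
        pass
    return {
        'principalId': claims.get('sub', 'anonymous') if ok else 'anonymous',
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Action': 'execute-api:Invoke',
                'Effect': 'Allow' if ok else 'Deny',
                'Resource': event.get('methodArn', '*'),
            }],
        },
    }
```

Note that this only covers the API Gateway; CloudFront in front of a private S3 bucket cannot evaluate a JWT by itself — that usually requires a Lambda@Edge function or signed URLs/cookies.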

Stream large string to S3 using boto3

天涯浪子 · Submitted on 2020-08-05 05:12:08
Question: I am downloading files from S3, transforming the data inside them, and then creating a new file to upload to S3. The files I am downloading are less than 2 GB, but because I am enhancing the data, by the time I go to upload it, it is quite large (200 GB+). Currently you can imagine my code is like:

    files = list_files_in_s3()
    new_file = open('new_file', 'w')
    for file in files:
        file_data = fetch_object_from_s3(file)
        str_out = ''
        for data in file_data:
            str_out += transform_data(data)
        new_file.write(str
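
Accumulating the whole transformed output in one string (or one local file) is what forces everything into memory. One way around this is to generate the transformed data lazily and wrap the generator as a file-like object, which boto3's `upload_fileobj` can then stream to S3 as a multipart upload. A sketch under stated assumptions — the class and helper names are hypothetical, and `transform` stands in for the question's `transform_data`:

```python
import io

class GeneratorStream(io.RawIOBase):
    """Expose a generator of byte chunks as a readable file object, so
    it can be handed to an API that expects a stream (e.g. boto3's
    upload_fileobj) without materializing the full 200 GB result."""

    def __init__(self, chunks):
        self._chunks = iter(chunks)
        self._buf = b''

    def readable(self):
        return True

    def read(self, size=-1):
        # Pull chunks from the generator until we can satisfy the read.
        while size < 0 or len(self._buf) < size:
            try:
                self._buf += next(self._chunks)
            except StopIteration:
                break
        if size < 0:
            out, self._buf = self._buf, b''
        else:
            out, self._buf = self._buf[:size], self._buf[size:]
        return out

def transformed_chunks(file_data, transform):
    """Yield each transformed record as bytes instead of concatenating."""
    for record in file_data:
        yield transform(record).encode()

# Usage sketch (s3 is a boto3 client; bucket/key names are assumptions):
# s3.upload_fileobj(
#     GeneratorStream(transformed_chunks(file_data, transform_data)),
#     'my-bucket', 'new_file')
```

`upload_fileobj` reads the stream in parts (8 MB by default), so peak memory stays roughly at the part size rather than the object size.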

API Gateway GET / PUT large files into S3

橙三吉。 · Submitted on 2020-08-04 05:19:42
Question: Following this AWS documentation, I was able to create a new endpoint on my API Gateway that can manipulate files on an S3 repository. The problem I'm having is the file size (AWS has a payload limit of 10 MB). I was wondering, without using a Lambda workaround (this link would help with that), would it be possible to upload and get files bigger than 10 MB (even as binary if needed), seeing as this uses an S3 service as a proxy - or does the limit apply regardless? I've tried PUT

Hosting multiple SPA web apps on S3 + Cloudfront under same URL

末鹿安然 · Submitted on 2020-08-04 04:26:07
Question: I have two static web apps (create-react-apps) that are currently in two separate S3 buckets. Both buckets are configured for public read + static web hosting, and visiting their S3-hosted URLs correctly displays the sites.

    Bucket 1 - First App:
        index.html
        static/js/main.js

    Bucket 2 - Second App:
        /secondapp/
            index.html
            static/js/main.js

I have set up a single CloudFront for this - the default CloudFront origin loads FirstApp correctly, such that www.mywebsite.com loads the index.html by default
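
The standard approach here is one CloudFront distribution with two origins and an extra cache behavior whose path pattern (e.g. /secondapp/*) routes to the second bucket, while the default behavior serves the first. A simplified sketch of that matching logic (CloudFront's real pattern syntax is close to, but not identical to, shell globbing; the bucket names are assumptions):

```python
from fnmatch import fnmatch

# Cache behaviors in CloudFront precedence order: the first matching
# path pattern wins, and '*' (the default behavior) catches the rest.
BEHAVIORS = [
    ('/secondapp/*', 'secondapp-bucket.s3.amazonaws.com'),
    ('*',            'firstapp-bucket.s3.amazonaws.com'),
]

def choose_origin(path, behaviors=BEHAVIORS):
    """Pick the origin the way CloudFront evaluates cache behaviors:
    most specific, highest-precedence pattern first."""
    for pattern, origin in behaviors:
        if fnmatch(path, pattern):
            return origin
    return behaviors[-1][1]
```

With this routing in place, both apps share www.mywebsite.com; the second app just needs its asset paths built relative to /secondapp/ (e.g. the homepage field in create-react-app).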

Force S3 PDF to be viewed in browser instead of download

穿精又带淫゛_ · Submitted on 2020-08-01 16:41:49
Question: So you can force a download by using Content-Disposition: attachment. Content-Disposition: inline is the default and should display in the browser, and it does in fact work with most files like PNG, JPG, etc. But for some reason, when generating a presigned URL from S3, PDF files always force a download even if I don't use the Content-Disposition: attachment header. I want the PDF to open in the browser when the browser allows it. I am using the presigned URL generate

Amazon S3: Can clients see the file before upload is complete

拈花ヽ惹草 · Submitted on 2020-08-01 10:51:08
Question: At http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html, I found the following:

    Amazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the entire object to the bucket.

But that's talking about me receiving a success response. Am I guaranteed that no other client will see the object when listing objects in the bucket -- until the entire object is uploaded? I want to use S3 as a "spool" directory -- I'll upload files there, and another client will
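S3 PUTs are atomic: an object never appears in a listing until its upload has fully succeeded, so no client can ever read a half-uploaded object. For a spool, the remaining risk is a batch producer crashing between files, which a zero-byte marker object can guard against. A sketch of that convention (the function names and the '.done' suffix are assumptions; s3_client is a boto3 S3 client):

```python
def spool_upload(s3_client, bucket, key, fileobj):
    """Upload the data object first, then a zero-byte '.done' marker.
    S3 itself never exposes partial objects, so the marker's job is to
    signal that this file (or batch step) is complete and safe to
    consume."""
    s3_client.upload_fileobj(fileobj, bucket, key)
    s3_client.put_object(Bucket=bucket, Key=key + '.done', Body=b'')

def ready_keys(keys):
    """Given a bucket listing, return only data keys whose marker exists."""
    done = {k[:-len('.done')] for k in keys if k.endswith('.done')}
    return [k for k in keys if not k.endswith('.done') and k in done]
```

The consumer lists the prefix, processes only the keys `ready_keys` returns, and can delete both the object and its marker when finished.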

How to set InputStream content Length

。_饼干妹妹 · Submitted on 2020-08-01 09:24:11
Question: I am uploading files to an Amazon S3 bucket. The files are being uploaded, but I get the following warning:

    WARNING: No content length specified for stream data. Stream contents will be buffered in memory and could result in out of memory errors.

So I added the following line to my code:

    metaData.setContentLength(IOUtils.toByteArray(input).length);

but then I got the following message (I don't even know if it is a warning or what):

    Data read has a different length than the expected: dataLength=0;
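
The dataLength=0 message is the clue: IOUtils.toByteArray(input) drained the InputStream to measure it, leaving nothing for the actual upload. The fix in any SDK is to read the stream once into a byte array, set the length from it, and upload those bytes (not the exhausted stream). The question uses the AWS SDK for Java; the same fix, sketched in Python with hypothetical names (s3_client is assumed to be a boto3 S3 client):

```python
def upload_with_length(s3_client, bucket, key, stream):
    """Read the stream ONCE into memory, then upload the measured bytes.

    Reusing the bytes we measured (rather than the now-empty stream)
    avoids both the 'no content length' buffering warning and the
    'dataLength=0' mismatch from a drained stream.
    """
    data = stream.read()
    s3_client.put_object(Bucket=bucket, Key=key,
                         Body=data, ContentLength=len(data))
    return len(data)
```

In the Java code, the equivalent is to hold onto the byte[] from toByteArray and upload a new ByteArrayInputStream over it; for files too large to buffer, a multipart upload avoids needing the total length up front.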