amazon-s3

AWS S3 Upload Access Denied through Presigned post generated with AWS-SDK PHP

江枫思渺然 submitted on 2021-01-27 17:07:45
Question: I'm trying to upload a file (an image for my tests) to my S3 bucket with a pre-signed POST generated with the AWS SDK for PHP. First I generate the pre-signed POST, then I manually create the request with the given PostObjectV4 data using Postman or a simple HTML form... After filling everything in, the request results in Access Denied :-(. The user associated with the client that generates the PostObjectV4 has an Allow s3:PutObject policy on the corresponding bucket. I've already tried to: Set my…
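
For comparison, here is a minimal sketch of generating and submitting a pre-signed POST, shown with Python's boto3 rather than the PHP SDK the question uses; the region, bucket name, key, and file name are hypothetical:

    import boto3
    import requests

    s3 = boto3.client("s3", region_name="eu-west-1")  # hypothetical region

    # Generate the pre-signed POST (URL + form fields) for a hypothetical bucket/key.
    post = s3.generate_presigned_post(
        Bucket="my-example-bucket",
        Key="uploads/test.jpg",
        ExpiresIn=3600,
    )

    # Submit it the way a browser form would: all returned fields plus the file part.
    with open("test.jpg", "rb") as fh:
        resp = requests.post(post["url"], data=post["fields"], files={"file": fh})
    print(resp.status_code, resp.text)

If the signing credentials lack s3:PutObject on the bucket, or the submitted form fields do not match the signed policy conditions, S3 typically responds with Access Denied.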

AWS cross region replication to multiple regions?

爱⌒轻易说出口 submitted on 2021-01-27 13:51:03
Question: I am trying to set up cross-region replication so that my original file is replicated to two different regions. Right now, I can only get it to replicate to one other region. For example, my files are in US Standard. When a file is uploaded, it is replicated from US Standard to US West 2. I would also like that file to be replicated to US West 1. Is there a way to do this? Answer 1: It appears that Cross-Region Replication in Amazon S3 cannot be chained. Therefore, it cannot be used to…
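
As an illustration of the fan-out alternative (replicating the source bucket to two destinations directly rather than chaining through the first replica), here is a hedged boto3 sketch of a replication configuration with two rules. The bucket names, ARNs, and IAM role are hypothetical, versioning must be enabled on all three buckets, and whether multiple destination rules are accepted depends on the S3 Replication features available to the account:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical source bucket, destination buckets, and replication role.
    s3.put_bucket_replication(
        Bucket="source-bucket-us-standard",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "to-us-west-2",
                    "Priority": 1,
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::replica-bucket-us-west-2"},
                },
                {
                    "ID": "to-us-west-1",
                    "Priority": 2,
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::replica-bucket-us-west-1"},
                },
            ],
        },
    )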

Is there any way to upload extracted zip file using “java.util.zip” to AWS-S3 using multipart upload (Java high level API)

笑着哭i submitted on 2021-01-27 13:13:51
Question: I need to upload a large file to AWS S3 using multipart upload from a stream, instead of using Lambda's /tmp. The file is uploaded, but not completely. In my case the size of each file in the zip cannot be predicted; a file may be up to 1 GiB in size. So I used ZipInputStream to read from S3, and I want to upload it back to S3. Since I am working on Lambda, I cannot save the file in Lambda's /tmp due to the large file size, so I tried to read and upload directly to S3 without saving in…
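
The question uses the Java high-level API; purely as a sketch of the same idea, here is a boto3 version of streaming an arbitrary binary stream into S3 with the low-level multipart calls. Bucket, key, and part size are hypothetical; note that every part except the last must be at least 5 MiB, or the final complete call fails:

    import boto3

    s3 = boto3.client("s3")
    BUCKET, KEY = "my-example-bucket", "extracted/file.bin"  # hypothetical
    PART_SIZE = 8 * 1024 * 1024  # keep non-final parts >= 5 MiB

    def upload_stream(stream):
        """Read a binary stream in chunks and multipart-upload it without touching disk."""
        mpu = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)
        parts, part_no = [], 1
        try:
            while True:
                chunk = stream.read(PART_SIZE)
                if not chunk:
                    break
                resp = s3.upload_part(
                    Bucket=BUCKET, Key=KEY, UploadId=mpu["UploadId"],
                    PartNumber=part_no, Body=chunk,
                )
                parts.append({"ETag": resp["ETag"], "PartNumber": part_no})
                part_no += 1
            s3.complete_multipart_upload(
                Bucket=BUCKET, Key=KEY, UploadId=mpu["UploadId"],
                MultipartUpload={"Parts": parts},
            )
        except Exception:
            # Abort so the incomplete parts do not keep accruing storage charges.
            s3.abort_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=mpu["UploadId"])
            raise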

Adding zip file as Content in Web API response doubling file size on download

笑着哭i submitted on 2021-01-27 12:51:40
Question: I am saving zip files to an AWS S3 bucket. I am now trying to create a C# .NET API that will allow me to download a specified key from the bucket and save it to an HttpResponseMessage in the Content key. I've referred to the following question to set up my response for zip files: How to send a zip file from Web API 2 HttpGet. I have modified the code in the previous question so that it instead reads from a TransferUtility stream. The problem is I run into an error when trying to extract or…
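
A size change between the stored object and the downloaded payload often points to the binary zip bytes being passed through a text encoding somewhere in the pipeline. As a quick sanity check on the S3 side (shown with Python's boto3 rather than the C# SDK; bucket and key are hypothetical), the raw bytes should match ContentLength and open as a valid archive:

    import io
    import zipfile
    import boto3

    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="my-example-bucket", Key="archives/data.zip")  # hypothetical
    body = obj["Body"].read()  # raw bytes, no text decoding anywhere

    # If these differ, something between S3 and the client re-encoded the payload.
    print("stored size:", obj["ContentLength"], "downloaded size:", len(body))

    # The raw bytes should open directly as a zip archive.
    with zipfile.ZipFile(io.BytesIO(body)) as zf:
        print(zf.namelist())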

Read h5 file using AWS S3 s3fs/boto3

*爱你&永不变心* submitted on 2021-01-27 07:06:49
Question: I am trying to read an h5 file from AWS S3. I am getting the following errors using s3fs/boto3. Can you help? Thanks!

    import s3fs
    fs = s3fs.S3FileSystem(anon=False, key='key', secret='secret')
    with fs.open('file', mode='rb') as f:
        h5 = pd.read_hdf(f)
    TypeError: expected str, bytes or os.PathLike object, not S3File

    fs = s3fs.S3FileSystem(anon=False, key='key', secret='secret')
    with fs.open('file', mode='rb') as f:
        hf = h5py.File(f)
    TypeError: expected str, bytes or os.PathLike object, not S3File
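
One commonly used workaround, assuming a reasonably recent h5py (2.9+ accepts Python file-like objects), is to buffer the object in memory and hand h5py a seekable stream; the bucket and key below are hypothetical:

    import io
    import boto3
    import h5py

    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="my-example-bucket", Key="data/file.h5")  # hypothetical

    # Buffer the whole object in memory and give h5py a seekable file-like object.
    buf = io.BytesIO(obj["Body"].read())
    with h5py.File(buf, "r") as hf:
        print(list(hf.keys()))

pandas.read_hdf goes through PyTables and generally expects a local path, so downloading to a temporary file first is the usual route for that API.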

How to concatenate S3 bucket name in Terraform variable and pass it to main tf file

痞子三分冷 submitted on 2021-01-27 06:50:56
Question: I'm writing Terraform templates to create two S3 buckets; however, my requirement is to concatenate their names in vars.tf and then pass them to the main tf file. Below are the vars.tf and main s3.tf files.

vars.tf:

    variable TENANT_NAME {
      default = "Mansing"
    }
    variable BUCKET_NAME {
      type    = "list"
      default = ["bh.${var.TENANT_NAME}.o365.attachments", "bh.${var.TENANT_NAME}.o365.eml"]
    }

s3.tf:

    resource "aws_s3_bucket" "b" {
      bucket = "${element(var.BUCKET_NAME, 2)}"
      acl    = "private"
    }

When do terraform…

Why am I getting a 403 error when uploading to S3 from the browser?

我是研究僧i submitted on 2021-01-27 06:28:09
Question: So I've tried looking through previous answers on here and nothing seems to be working. I'm using Dropzone, which appears to make an OPTIONS request to get all the allowed CORS-related information, but it doesn't seem to be returning properly. From looking in the Chrome dev tools, I have the following request headers:

    Host: mybucket.s3.amazonaws.com
    Connection: keep-alive
    Pragma: no-cache
    Cache-Control: no-cache
    Access-Control-Request-Method: POST
    Origin: http://localhost:9010
    User-Agent:…
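
For context, a browser upload like this only succeeds if the bucket's CORS configuration allows the requesting origin and method. Here is a minimal boto3 sketch of such a configuration, reusing the origin and bucket name from the headers above but otherwise hypothetical:

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_cors(
        Bucket="mybucket",  # bucket name taken from the Host header above
        CORSConfiguration={
            "CORSRules": [
                {
                    "AllowedOrigins": ["http://localhost:9010"],
                    "AllowedMethods": ["POST", "PUT"],
                    "AllowedHeaders": ["*"],
                    "ExposeHeaders": ["ETag"],
                    "MaxAgeSeconds": 3000,
                }
            ]
        },
    )

Note that a 403 on the actual upload (as opposed to the OPTIONS preflight) usually points at the signed policy or credentials rather than CORS.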

How to get latest file-name or file from S3 bucket using event triggered lambda

人盡茶涼 submitted on 2021-01-26 18:00:49
Question: I am very new to AWS services and have just a week's worth of experience with serverless architecture. My requirement is to trigger an event when a new file is uploaded to a specific bucket; once the event trigger is set, my Lambda should get the details of the latest file, such as name, size, and date of creation. The source uploads this file into a new folder every time and names the folder with the current date. So far I have been able to figure out how to create my Lambda function and listen to the…
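
A minimal sketch of the handler side of this setup: with an S3 "ObjectCreated" trigger, each invocation already carries the new object's bucket, key, size, and event time in the event payload, so there is no need to list the bucket looking for the "latest" file. The function and variable names here are illustrative:

    import urllib.parse

    def lambda_handler(event, context):
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            # Keys arrive URL-encoded (e.g. spaces become '+'), so decode them.
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            size = record["s3"]["object"]["size"]
            event_time = record["eventTime"]  # when the object was created
            print(f"New object s3://{bucket}/{key}, {size} bytes, created {event_time}")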

Amazon S3 policy allowing only upload not overwrite [duplicate]

拥有回忆 submitted on 2021-01-26 03:46:24
Question: This question already has answers here: Amazon S3 ACL for read-only and write-once access (4 answers). Closed 3 years ago. I'm developing a mobile application that will let anyone upload a file to an S3 bucket. I think I'll use the Anonymous Token Vending Machine provided by Amazon. However, I can't see how to write a TokenVendingMachinePolicy.json file that will only allow uploading new files, not overwriting (which is effectively deleting). I thought allowing just s3:PutObject would be…