amazon-s3

Amazon S3 Compressing Files?

Submitted by 穿精又带淫゛_ on 2020-12-14 06:55:53

Question: A few years ago I uploaded some photos to S3. When I try to retrieve them today, the files seem to be corrupted, as I am unable to open them in the browser or with a photo editor. Looking at the file properties, it seems the files have been compressed: there is an x-amz-meta-compression-algorithm key with the value zlib and an x-amz-meta-compression-original-size key with a value of 53890. However, the size of the file on S3 is 53761. I did not compress the files before uploading them. How can
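
If the objects really were stored zlib-deflated, as the x-amz-meta-compression-algorithm metadata suggests, one way to recover the original bytes is to download the object with boto3 and inflate it yourself. This is only a minimal sketch: the bucket name, key, and local filename are placeholders, and it assumes the metadata values described in the question.

    import zlib
    import boto3

    s3 = boto3.client("s3")

    # Placeholder bucket/key; substitute the real object location.
    resp = s3.get_object(Bucket="my-photo-bucket", Key="photos/IMG_0001.jpg")
    meta = resp["Metadata"]      # user metadata, returned without the x-amz-meta- prefix
    body = resp["Body"].read()

    if meta.get("compression-algorithm") == "zlib":
        # Inflate back to the original bytes; the length should match compression-original-size.
        data = zlib.decompress(body)
        assert len(data) == int(meta["compression-original-size"])
    else:
        data = body

    with open("IMG_0001.jpg", "wb") as f:
        f.write(data)

The small gap between 53761 and 53890 bytes is consistent with zlib applied to already-compressed image data, which barely shrinks.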

Spark + EMRFS/S3 - Is there a way to read client side encrypted data and write it back using server side encryption?

Submitted by 元气小坏坏 on 2020-12-14 06:38:46

Question: I have a use case in Spark where I have to read data from S3 that uses client-side encryption, process it, and write it back using only server-side encryption. I'm wondering if there's a way to do this in Spark? Currently, I have these options set:
spark.hadoop.fs.s3.cse.enabled=true
spark.hadoop.fs.s3.enableServerSideEncryption=true
spark.hadoop.fs.s3.serverSideEncryption.kms.keyId=<kms id here>
But obviously, it's ending up using both CSE and SSE while writing the data. So, I'm wondering
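
No answer is included for this one, but to make the setup concrete, here is a minimal PySpark sketch of the configuration the question describes; the KMS key id and the S3 paths are placeholders. As the question notes, with fs.s3.cse.enabled set globally, writes go through client-side encryption as well.

    from pyspark.sql import SparkSession

    # Sketch of the settings described in the question; key id and paths are placeholders.
    spark = (
        SparkSession.builder
        .appName("cse-read-sse-write")
        .config("spark.hadoop.fs.s3.cse.enabled", "true")                 # decrypt CSE objects on read
        .config("spark.hadoop.fs.s3.enableServerSideEncryption", "true")  # request SSE on write
        .config("spark.hadoop.fs.s3.serverSideEncryption.kms.keyId", "<kms id here>")
        .getOrCreate()
    )

    df = spark.read.parquet("s3://input-bucket/cse-data/")        # hypothetical input path
    df.write.parquet("s3://output-bucket/processed-data/")        # hypothetical output path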

Limiting 'ls' command output in s3fs

Submitted by 强颜欢笑 on 2020-12-13 09:36:24

Question: My Amazon S3 bucket has millions of files and I am mounting it using s3fs. Any time an ls command is issued (not intentionally), the terminal hangs. Is there a way to limit the number of results returned to 100 when an ls command is issued on an s3fs-mounted path?
Answer 1: Try goofys (https://github.com/kahing/goofys). It doesn't limit the number of items returned for ls, but ls is about 40x faster than s3fs when there are lots of files.
Answer 2: It is not recommended to use s3fs in production situations.
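
As a point of comparison, a bounded listing is straightforward when you talk to the S3 API directly instead of going through the mount. A minimal boto3 sketch, with placeholder bucket and prefix:

    import boto3

    s3 = boto3.client("s3")

    # Return at most 100 keys under the prefix, instead of enumerating millions of objects.
    resp = s3.list_objects_v2(Bucket="my-big-bucket", Prefix="some/dir/", MaxKeys=100)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])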

trigger lambda from a button in a static website in s3

Submitted by 邮差的信 on 2020-12-13 06:39:26

Question: I have this static website that has a form with a couple of fields. CloudFront is in front of the bucket, routing traffic to the site. The form in question naturally has a button that POSTs to '#'. Is there a way I could make clicks on the button trigger a Lambda function with the contents of the form's fields? Thanks in advance.
Answer 1: API Gateway is typically used to call the Lambda function from a web page. Here is a basic tutorial matching your architecture: https://aws.amazon.com/getting
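
To sketch the API Gateway approach from Answer 1: the page POSTs the form fields to an API Gateway endpoint, which invokes a Lambda handler such as the one below. This assumes Lambda proxy integration and a JSON body; the field names are hypothetical.

    import json

    def handler(event, context):
        # With Lambda proxy integration, API Gateway passes the POSTed body as a string.
        # Assumes the page submits the form fields as JSON; "name"/"email" are hypothetical fields.
        fields = json.loads(event.get("body") or "{}")
        name = fields.get("name")
        email = fields.get("email")

        # ... validate / store the submission here ...

        return {
            "statusCode": 200,
            # CORS header, since the page is served from the CloudFront domain, not the API's.
            "headers": {"Access-Control-Allow-Origin": "*"},
            "body": json.dumps({"ok": True, "received": fields}),
        }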

Can I resume a download from aws s3?

Submitted by 戏子无情 on 2020-12-13 06:31:08

Question: I am using the Python boto3 library to download files from S3 to an IoT device on a cellular connection, which is often slow and shaky. Some files are quite large (250 MB, which is large for this scenario), and the network fails and the device reboots while downloading. I would like to resume the download from where it ended when the device rebooted. Is there any way to do it? The aborted download does seem to keep downloaded data in a temporary file while downloading, so the data is there.
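
One way to resume is to skip the managed transfer and issue a ranged GetObject starting at the byte count already on disk. A minimal sketch, with placeholder bucket, key, and destination path:

    import os
    import boto3

    s3 = boto3.client("s3")
    bucket, key, dest = "my-bucket", "firmware/update.bin", "/data/update.bin"  # placeholders

    # Resume from however many bytes made it to disk before the reboot.
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0

    # Note: a Range starting at or beyond the object's size returns an InvalidRange error,
    # so an already-complete download should be detected first (e.g. via head_object ContentLength).
    resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={offset}-")
    with open(dest, "ab") as f:
        for chunk in resp["Body"].iter_chunks(chunk_size=64 * 1024):
            f.write(chunk)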