amazon-s3

Does AWS SDK for Java communicate in a secure channel with S3 servers?

Submitted by 我是研究僧i on 2020-12-06 06:50:26
Question: I would like to think that it's a big YES, but I prefer to ask rather than assume. Do you know whether the AWS SDK for Java always uses a secure channel when I download/upload files from/to S3 buckets? Or is this something that should be configured when I write the code, or on the S3 buckets themselves? Answer 1: Amazon S3 endpoints support both HTTP and HTTPS (http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region). When you're using the Java SDK you will create an AmazonS3Client, and if you…
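
The answer above is cut off, but the gist is that the protocol is a client-side setting (in the Java SDK, ClientConfiguration.withProtocol(Protocol.HTTPS)). As a rough illustration of the same idea in Python's boto3, where HTTPS is likewise the default; the bucket and key below are placeholders:

    import boto3

    # use_ssl defaults to True, so boto3 talks to S3 over HTTPS unless
    # you explicitly opt out; passing it here just makes the intent explicit.
    s3 = boto3.client("s3", use_ssl=True)

    # Placeholder bucket/key, purely for illustration.
    s3.download_file("my-bucket", "path/to/file.txt", "/tmp/file.txt")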

How do I get Zlib to uncompress from S3 stream in Ruby?

Submitted by 心已入冬 on 2020-12-05 11:11:32
Question: Ruby's Zlib::GzipReader should be created by passing it an IO-like object (one that has a read method behaving the same as IO#read). My problem is that I can't get such an IO-like object out of the AWS::S3 lib. As far as I know, the only way to get a stream from it is to pass a block to S3Object#stream. I already tried: Zlib::GzipReader.new(AWS::S3::S3Object.stream('file', 'bucket')) # Which gives me the error: undefined method `read' for #<AWS::S3::S3Object::Value:0x000000017cbe78> Does anybody know how…
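
The question is about the old Ruby aws-s3 gem, but the underlying pattern, wrapping an S3 stream in a gzip reader, is easy to see in Python with boto3, where the GetObject body is already a file-like object with a read method. A minimal sketch; bucket and key names are made up:

    import gzip

    import boto3

    s3 = boto3.client("s3")

    # The StreamingBody returned by get_object exposes read(), which is
    # all gzip.GzipFile needs to decompress on the fly.
    body = s3.get_object(Bucket="bucket", Key="file")["Body"]
    with gzip.GzipFile(fileobj=body) as gz:
        for line in gz:
            print(line.decode("utf-8"))

For what it's worth, the modern aws-sdk Ruby gems should sidestep the original problem: the body of a get response there is already IO-like and can be handed to Zlib::GzipReader directly.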

dropzone.js direct upload to S3 with content-type

Submitted by 无人久伴 on 2020-12-05 03:17:53
Question: I'm currently using dropzone.js to upload images to S3 with a presigned URL. Everything works except that I am unable to set the content type of the file being uploaded. By default they are all uploaded as binary/octet-stream, and I am unable to view them directly in the browser. My S3 presigned policy looks like this: const policy = s3PolicyV4.generate({ key: key, bucket: process.env.S3_BUCKET, contentType: 'multipart/form-data', region: process.env.REGION, accessKey: process.env.ACCESS…
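
The usual fix is to sign the file's real MIME type (e.g. image/jpeg) into the presigned request instead of multipart/form-data, and have the client send a matching Content-Type header. The question's s3PolicyV4 helper is a separate library, so here is a hedged sketch of the server side using boto3; bucket and key are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Sign the actual MIME type into the URL; the client must then PUT
    # with a matching Content-Type header or S3 will reject the request.
    url = s3.generate_presigned_url(
        "put_object",
        Params={
            "Bucket": "my-bucket",
            "Key": "uploads/photo.jpg",
            "ContentType": "image/jpeg",
        },
        ExpiresIn=3600,
    )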

How to get size of all files in an S3 bucket with versioning?

Submitted by 筅森魡賤 on 2020-12-03 07:45:16
Question: I know this command can give the size of all files in a bucket: aws s3 ls mybucket --recursive --summarize --human-readable But it does not account for versioning. If I run this command: aws s3 ls s3://mybucket/myfile --human-readable it will show something like "100 MiB", yet there may be 10 versions of the file, adding up to more like "1 GiB" in total. The closest I have come is getting the sizes of every version of a given file: aws s3api list-object-versions --bucket mybucket --prefix "myfile…
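
To cover the whole bucket rather than a single prefix, the same ListObjectVersions API can be paginated and the per-version sizes summed. A rough sketch in Python with boto3, assuming the caller is allowed to list bucket versions; the bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_object_versions")

    total = 0
    for page in paginator.paginate(Bucket="mybucket"):
        # 'Versions' covers every version of every key; delete markers
        # carry no data, so they are not counted here.
        for version in page.get("Versions", []):
            total += version["Size"]

    print(f"{total / 1024 ** 3:.2f} GiB across all versions")

If an approximate daily figure is enough, CloudWatch's BucketSizeBytes metric for the bucket also includes all versions and avoids listing entirely.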

Store Excel file exported from Pandas in AWS

Submitted by 人盡茶涼 on 2020-12-03 06:49:03
Question: I'm making a small website using Flask, with a SQLite database. One of the things I want to do is take some data (from the database), export it as an Excel file, and offer that Excel file for download. One way to do this is to use Pandas to write an Excel file that would be stored on the web server, and then use Flask's send_file to offer the download. However, is it possible to provide a downloadable Excel file without storing the file "locally" on the server…
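
Yes: Pandas can write the workbook into an in-memory BytesIO buffer, which Flask's send_file accepts directly, so nothing ever touches the server's disk. A minimal sketch, assuming Flask 2.x (which names the parameter download_name) and openpyxl installed as the xlsx engine; the DataFrame stands in for the real database query:

    import io

    import pandas as pd
    from flask import Flask, send_file

    app = Flask(__name__)

    @app.route("/export")
    def export():
        # Stand-in for the real SQLite query.
        df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})

        # Write the workbook into memory instead of onto disk.
        buf = io.BytesIO()
        df.to_excel(buf, index=False)
        buf.seek(0)

        return send_file(
            buf,
            as_attachment=True,
            download_name="export.xlsx",
            mimetype="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
        )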

Custom domain for s3 bucket?

Submitted by ≯℡__Kan透↙ on 2020-12-02 17:31:40
Question: I have an S3 bucket called "mybucket". Files in it are available under the following links: mybucket.s3.amazonaws.com/path/to/file.jpg s3.amazonaws.com/mybucket/path/to/file.jpg I need a custom domain for the files served from S3. I added a DNS CNAME record pointing from images.mydomain.com to s3.amazonaws.com (I also tried images.mydomain.com -> mybucket.s3.amazonaws.com). In both cases, when I try to GET images.mydomain.com/mybucket/path/to/file.jpg (or images.mydomain.com/path/to/file.jpg) I get…
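
The truncated error is most likely NoSuchBucket: with a virtual-hosted-style CNAME, S3 derives the bucket name from the Host header, so the bucket must be named exactly images.mydomain.com and the CNAME should point at images.mydomain.com.s3.amazonaws.com. A hedged sketch of that setup with boto3 (region is a placeholder; note that a bare CNAME only works over HTTP, and HTTPS on a custom domain needs something like CloudFront in front):

    import boto3

    # The bucket name must match the hostname clients will request.
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="images.mydomain.com")

    # DNS record to add (outside of code):
    #   images.mydomain.com.  CNAME  images.mydomain.com.s3.amazonaws.com.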