amazon-s3

Streaming S3 object to VertX Http Server Response

Submitted by 孤人 on 2021-02-06 13:59:30
Question: The title basically explains itself. I have a REST endpoint with VertX. Upon hitting it, I have some logic which results in an AWS S3 object. My previous logic was not to upload to S3 but to save the file locally, so I could respond with routerCxt.response().sendFile(file_path...). Now that the file is in S3, I have to download it locally before I can call the above code. That is slow and inefficient. I would like to stream the S3 object directly to the response object. In Express, it…
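
The excerpt cuts off at the Express comparison; presumably it refers to the usual Express pattern of piping the SDK's read stream straight into the response. A minimal sketch of that pattern, assuming aws-sdk v2 and placeholder bucket/route names:

    // Hypothetical Express handler: stream the S3 object to the client
    // without writing it to disk first (aws-sdk v2; names are placeholders).
    const express = require('express');
    const AWS = require('aws-sdk');

    const app = express();
    const s3 = new AWS.S3();

    app.get('/download/:key', (req, res) => {
      s3.getObject({ Bucket: 'my-bucket', Key: req.params.key })
        .createReadStream()
        .on('error', (err) => res.status(500).end(err.message))
        .pipe(res); // bytes flow S3 -> server -> client as they arrive
    });

    app.listen(3000);

In Vert.x the equivalent idea would be pumping a ReadStream into routerCxt.response() instead of calling sendFile on a local copy.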

Amazon S3 lifecycle retroactive application

Submitted by 只谈情不闲聊 on 2021-02-06 09:35:06
Question: Fairly straightforward question: do Amazon S3 lifecycle rules that I set get applied to data retroactively? If so, what sort of delay might I see before older data begins to be archived or deleted? By way of example, let's say I have a bucket with 3 years of backed-up data. If I create a new lifecycle rule where that data will be archived after 31 days and deleted after 365 days, will that new rule be applied to the existing data? How soon will it begin to be enforced? Answer 1: Yes, it's retroactive…
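
For reference, a rule matching the numbers in the question might look like the JSON below, as accepted by aws s3api put-bucket-lifecycle-configuration (the rule ID and storage class are placeholders). Since lifecycle rules are evaluated against each object's age, three-year-old objects would already qualify on the first daily lifecycle run after the rule is saved:

    {
      "Rules": [
        {
          "ID": "archive-31d-delete-365d",
          "Status": "Enabled",
          "Filter": { "Prefix": "" },
          "Transitions": [
            { "Days": 31, "StorageClass": "GLACIER" }
          ],
          "Expiration": { "Days": 365 }
        }
      ]
    }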

Restrict S3 object access to requests from a specific domain

Submitted by 痴心易碎 on 2021-02-06 09:22:09
Question: I have video files in S3 and a simple player that loads the files via an src attribute. I want the videos to be viewable only through my site and not directly via the S3 URL (which might be visible in the page source or accessible by right-clicking). Looking through the AWS docs, it seems the only way I can do this over HTTP is to append a signature and expiration date to a query, but this isn't sufficient. Other access restrictions refer to AWS users. How do I get around this, or…
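
One approach from the AWS documentation, sketched below, is a bucket policy that only allows s3:GetObject when the request's Referer header matches your site (the bucket name and domain are placeholders). Referer headers are easy to spoof, so treat this as a deterrent rather than real protection; short-lived signed URLs generated server-side remain the stronger option:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowGetOnlyFromMySite",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::my-video-bucket/*",
          "Condition": {
            "StringLike": { "aws:Referer": "https://www.example.com/*" }
          }
        }
      ]
    }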

Spark History Server on S3A FileSystem: ClassNotFoundException

Submitted by 半城伤御伤魂 on 2021-02-06 09:18:37
Question: Spark can use the Hadoop S3A file system org.apache.hadoop.fs.s3a.S3AFileSystem. By adding the following to conf/spark-defaults.conf, I can get spark-shell to log to the S3 bucket:

    spark.jars.packages       net.java.dev.jets3t:jets3t:0.9.0,com.google.guava:guava:16.0.1,com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3
    spark.hadoop.fs.s3a.impl  org.apache.hadoop.fs.s3a.S3AFileSystem
    spark.eventLog.enabled    true
    spark.eventLog.dir        s3a://spark-logs-test/
    spark.history.fs…
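
Assuming it is the history server that throws the ClassNotFoundException, a likely cause is that it runs as a standalone daemon and does not resolve spark.jars.packages the way spark-shell does, so the S3A classes never reach its classpath. One fix is to hand the daemon the jars directly in conf/spark-env.sh (the jar paths and versions are placeholders; SPARK_DAEMON_CLASSPATH exists in Spark 2.3+):

    # conf/spark-env.sh: make the S3A jars visible to the history server daemon
    export SPARK_DAEMON_CLASSPATH=/path/to/hadoop-aws-2.7.3.jar:/path/to/aws-java-sdk-1.7.4.jar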

Can JavaScript detect if the user's browser supports gzip?

Submitted by 瘦欲@ on 2021-02-06 09:01:07
Question: Can I use JavaScript to detect whether the user's browser supports gzipped content (client side, not node.js or similar)? I am trying to support the following edge case: there are a lot of possible files that can load in a particular web app, and it would be better to load them on demand as the application runs rather than load them all initially. I want to serve these files from S3 with a far-future cache expiration date. Since S3 does not support gzipping files to clients that…
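
Browsers do not expose the Accept-Encoding header to scripts, so one workaround is a probe: store a tiny file on S3 gzip-compressed with Content-Encoding: gzip and check whether the browser can decode it. A sketch, assuming a hypothetical probe.json containing {"ok": true} and CORS enabled on the bucket:

    // S3 serves the stored gzipped bytes unconditionally; if the browser
    // can inflate gzip, json() parses, otherwise the raw bytes fail to parse.
    async function browserSupportsGzip() {
      try {
        const res = await fetch('https://my-bucket.s3.amazonaws.com/probe.json');
        const data = await res.json();
        return data.ok === true;
      } catch (e) {
        return false;
      }
    }

The result could then decide whether the app requests the .gz variants of its on-demand files.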

AWS IAM - Can you use multiple wildcards (*) in a value

Submitted by 我的梦境 on 2021-02-06 07:31:42
Question: In all of the IAM policy examples, they mention using wildcards (*) as placeholders for "stuff". However, the examples always use them at the end, and/or only demonstrate with one wildcard (e.g. to list everything in folder "xyz" with .../xyz/*). I can't find anything definitive regarding the use of multiple wildcards, for example to match anything in subfolders across multiple buckets: arn:aws:s3:::mynamespace-property*/logs/* to allow something to see any log files across a "production"…
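
As far as the IAM documentation describes it, * matches any run of characters and may appear multiple times within the resource portion of an ARN, so the pattern from the question is legal. A minimal sketch using that exact ARN (the action list is kept deliberately small):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::mynamespace-property*/logs/*"
        }
      ]
    }

Note that s3:GetObject applies to object ARNs like this one; listing the folders additionally requires s3:ListBucket on the bucket ARNs themselves.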

Uploading formData file from client to S3 using createPresignedPost or getPresignedUrl fails due to CORS

Submitted by 跟風遠走 on 2021-02-05 09:24:26
Question: I'm using a React web app and trying to upload a file to AWS S3. I've tried everything, both locally (localhost:3000) and when deployed to production (Vercel serverless functions). I first use a fetch to retrieve a presigned URL for the file upload, which works perfectly:

    module.exports = async (req, res) => {
      let { fileName, fileType } = req.body;
      const post = await s3.createPresignedPost({
        Bucket: BUCKET_NAME,
        Fields: { key: fileName },
        ContentType: fileType,
        Expires: 60
      });
      res.status(200)…
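
If the presigned URL itself is issued fine but the browser upload then fails, the missing piece is usually a CORS configuration on the bucket: presigning authorizes the request, but it does not exempt it from the browser's CORS checks. A sketch of a bucket CORS configuration (S3 console, Permissions > CORS; the origins are placeholders):

    [
      {
        "AllowedOrigins": ["http://localhost:3000", "https://your-app.vercel.app"],
        "AllowedMethods": ["GET", "PUT", "POST"],
        "AllowedHeaders": ["*"],
        "ExposeHeaders": ["ETag"]
      }
    ]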