amazon-s3

Google Docs viewer doesn't work with Amazon S3 signed URLs

Submitted by 不问归期 on 2020-01-11 06:36:06
Question: I am trying to display a .doc file stored in an S3 bucket inside an iframe using the Google Docs Viewer API. I already did some research and found this, which I tried to apply here: var encodedUrl = encodeURIComponent("http://myAPI.com/1d293950-67b2-11e7-8530-318c83fb9802/example.docx?X-Amz-Algorithm=AWS4-HMAC-SHA256%26X-Amz-Credential=GNRO0BLDYAJP1FU7ALIS%2F20170717%2Fus-east-1%2Fs3%2Faws4_request%26X-Amz-Date=20170717T145429Z%26X-Amz-Expires=600%26X-Amz-SignedHeaders=host%26X-Amz-Signature
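The usual fix, shown here only as a sketch in Python with boto3 rather than the asker's JavaScript, is to generate the presigned URL first and URL-encode it exactly once when building the viewer link; the bucket and key names below are made up for illustration.

import urllib.parse

import boto3

s3 = boto3.client("s3")

# Generate a presigned GET URL for the object; bucket and key names are hypothetical.
presigned_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "example.docx"},
    ExpiresIn=600,
)

# Encode the raw presigned URL exactly once when embedding it in the viewer URL.
# Escaping the "&" separators beforehand (as %26) and then encoding again
# double-encodes the signature parameters, and the viewer cannot fetch the file.
viewer_src = (
    "https://docs.google.com/viewer?embedded=true&url="
    + urllib.parse.quote(presigned_url, safe="")
)
print(viewer_src)  # use this as the iframe's src attribute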

If we want to use S3 to host Python packages, how can we tell pip where to find the newest version?

Submitted by 若如初见. on 2020-01-11 04:49:06
Question: We are trying to come up with a solution to have AWS S3 host and distribute our Python packages. Basically what we want to do is use "python3 setup.py bdist_wheel" to create a wheel, upload it to S3, and then have any server or any machine run "pip install $http://path/on/s3" (including a virtualenv in AWS Lambda). (We've looked into Pypicloud and thought it was overkill.) Creating the package and installing from S3 work fine. There is only one issue here: we will release new code and give them
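One common approach, sketched below in Python with boto3, is to publish a minimal PEP 503 "simple" index next to the wheels so that a plain pip install with --extra-index-url can resolve the newest version on its own. The bucket, prefix, and package names are hypothetical, and the sketch assumes the bucket is configured for static website hosting with index.html as the index document.

import boto3

BUCKET = "my-packages"            # hypothetical bucket (static website hosting enabled)
PACKAGE = "mypackage"             # hypothetical, normalized package name
PREFIX = f"wheels/{PACKAGE}/"     # where the .whl files were uploaded

s3 = boto3.client("s3")

# Collect every wheel under the prefix and build a minimal PEP 503 index page.
keys = [
    obj["Key"]
    for obj in s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", [])
    if obj["Key"].endswith(".whl")
]
links = "\n".join(
    '<a href="/{0}">{1}</a><br>'.format(key, key.rsplit("/", 1)[-1]) for key in keys
)
html = "<!DOCTYPE html><html><body>\n{0}\n</body></html>".format(links)

# pip requests <index-url>/<package>/, so the page lives at simple/<package>/index.html
# and the bucket's website configuration must serve index.html as the index document.
s3.put_object(
    Bucket=BUCKET,
    Key=f"simple/{PACKAGE}/index.html",
    Body=html.encode(),
    ContentType="text/html",
)

# Afterwards pip picks the newest version itself, e.g.:
#   pip install mypackage --extra-index-url http://my-packages.s3-website-us-east-1.amazonaws.com/simple/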

Can someone walk me through serving gzipped files from CloudFront via an S3 origin?

Submitted by [亡魂溺海] on 2020-01-11 04:04:12
Question: I've been through quite a few suggestions on this topic from other posts on Stack Overflow, but I'm still not getting it to work. The website origin is on S3, and it is served via CloudFront. Going through the other posts and the Amazon docs, I'm seeing suggestions such as: 1) Gzip the necessary files, remove the .gz from the file names, but on uploading still set the metadata to gzip. This isn't working for me. Safari just downloads the gzipped file(s) instead of serving as a
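As a sketch of suggestion 1) in Python with boto3 (the bucket name and file paths are invented for illustration), the object is uploaded under its original name with both Content-Encoding and Content-Type set, which is what tells the browser to decompress instead of downloading a raw .gz blob.

import gzip

import boto3

s3 = boto3.client("s3")

# Gzip the file locally; the S3 key keeps the original name with no ".gz" suffix.
with open("app.js", "rb") as src, gzip.open("app.js.gz", "wb") as dst:
    dst.write(src.read())

# Upload with Content-Encoding and Content-Type metadata so CloudFront/S3 pass the
# right headers through to the browser.
s3.upload_file(
    "app.js.gz",
    "my-website-bucket",        # hypothetical bucket name
    "js/app.js",                # key without the .gz suffix
    ExtraArgs={
        "ContentEncoding": "gzip",
        "ContentType": "application/javascript",
    },
)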

Untarring files to S3 fails, not sure why

Submitted by a 夏天 on 2020-01-11 04:03:09
Question: (new information below) I am trying to set up a Lambda function that reacts to uploaded .tgz files by uncompressing them and writing the results back to S3. The unzip and untar work fine, but uploading to S3 fails: /Users/russell/lambda/gzip/node_modules/aws-sdk/lib/s3/managed_upload.js:350 var buf = self.body.read(self.partSize - self.partBuffer.length) || ^ TypeError: undefined is not a function at ManagedUpload.fillStream (/Users/russell/lambda/gzip/node_modules/aws-sdk/lib/s3/managed
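For comparison only, here is a rough Python sketch of the same idea using tarfile and boto3's upload_fileobj, which accepts any readable file-like object; the bucket names and key are placeholders, and this is not the asker's Node.js code.

import tarfile

import boto3

s3 = boto3.client("s3")

SRC_BUCKET = "incoming-bucket"      # hypothetical source bucket
DST_BUCKET = "extracted-bucket"     # hypothetical destination bucket
KEY = "archive.tgz"                 # hypothetical key

# Stream the tgz from S3 and upload each member back as its own object.
obj = s3.get_object(Bucket=SRC_BUCKET, Key=KEY)
with tarfile.open(fileobj=obj["Body"], mode="r|gz") as tar:
    for member in tar:
        if not member.isfile():
            continue
        extracted = tar.extractfile(member)   # readable file-like object
        # upload_fileobj only needs a readable stream, so no manual part handling.
        s3.upload_fileobj(extracted, DST_BUCKET, member.name)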

Structured Streaming won't write DF to file sink, citing /_spark_metadata/9.compact doesn't exist

Submitted by 自作多情 on 2020-01-10 23:29:21
Question: I'm building a Kafka ingest module on EMR 5.11.1 with Spark 2.2.1. My intention is to use Structured Streaming to consume from a Kafka topic, do some processing, and store to EMRFS/S3 in Parquet format. The console sink works as expected, but the file sink does not. In spark-shell: val event = spark.readStream.format("kafka") .option("kafka.bootstrap.servers", <server list>) .option("subscribe", <topic>) .load() val eventdf = event.select($"value" cast "string" as "json") .select(from_json($"json",
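For reference, a PySpark equivalent of the intended pipeline might look like the sketch below; the angle-bracketed values are the question's own placeholders, and the output and checkpoint paths are hypothetical.

# PySpark sketch of the same pipeline; <server list> and <topic> are the question's
# placeholders, and the output/checkpoint paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

event = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<server list>")
    .option("subscribe", "<topic>")
    .load()
)

eventdf = event.select(col("value").cast("string").alias("json"))

# The file sink keeps a commit log under <output path>/_spark_metadata; errors like
# "_spark_metadata/9.compact doesn't exist" typically mean the output or checkpoint
# directory was reused or partially cleaned, so both should be dedicated to this query.
query = (
    eventdf.writeStream.format("parquet")
    .option("path", "s3://<bucket>/events/")
    .option("checkpointLocation", "s3://<bucket>/checkpoints/events/")
    .start()
)
query.awaitTermination()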

Amazon S3: How to get a list of folders in the bucket?

Submitted by て烟熏妆下的殇ゞ on 2020-01-10 19:41:09
Question: All that I found is this method, GET Bucket, but I can't understand how I can get only a list of the folders in the current folder. Which prefix and delimiter do I need to use? Is that possible at all? Answer 1: For the sake of example, assume I have a bucket in the USEast1 region called MyBucketName, with the following keys: temp/ temp/foobar.txt temp/txt/ temp/txt/test1.txt temp/txt/test2.txt temp2/ Working with folders can be confusing because S3 does not natively support a hierarchy structure --
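A sketch of that listing with boto3 (the Python SDK), using the example bucket from the answer; the Delimiter parameter is what makes S3 roll keys up into folder-like CommonPrefixes.

import boto3

s3 = boto3.client("s3")

# Delimiter="/" makes S3 roll keys up to the next "/" into CommonPrefixes, which is
# what acts like a folder listing. With no Prefix this returns the top-level "folders".
resp = s3.list_objects_v2(Bucket="MyBucketName", Delimiter="/")
for p in resp.get("CommonPrefixes", []):
    print(p["Prefix"])          # temp/  temp2/

# To list the folders directly inside temp/, add Prefix="temp/".
resp = s3.list_objects_v2(Bucket="MyBucketName", Prefix="temp/", Delimiter="/")
for p in resp.get("CommonPrefixes", []):
    print(p["Prefix"])          # temp/txt/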

How can I get query strings in my Amazon S3 static website?

Submitted by …衆ロ難τιáo~ on 2020-01-10 14:08:10
Question: I am hosting a static website on Amazon S3. Some of my client-side JavaScript parses the query strings to control the HTML. This works fine locally, but on the S3-hosted version the query strings seem to get dropped from the request. My motivation for using query strings is that I want to be able to pass state between pages based on what the user did on the previous page. Is this approach possible? Did I violate the "static" requirement for S3 static websites? I can't seem to find any

Can I FTP data into AWS S3?

Submitted by 你说的曾经没有我的故事 on 2020-01-10 09:39:50
Question: Is it possible to upload content into S3 using a standard FTP client like FileZilla? I am unsure at the moment how best to get data uploaded in bulk. Thanks. Answer 1: S3 doesn't support FTP directly, but on the Mac you can use a tool like Cyberduck, and on Windows CloudBerry has a pretty complete set of tools (including some free ones): http://cyberduck.io/ http://www.cloudberrylab.com/ To the best of my knowledge, FileZilla doesn't support S3, though I wouldn't be surprised if it does someday
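If a GUI client isn't required, a scripted bulk upload with boto3 is another option; the sketch below mirrors a folder tree into a bucket, with the local directory and bucket name invented for illustration.

import os

import boto3

s3 = boto3.client("s3")

LOCAL_DIR = "/data/to-upload"       # hypothetical local directory
BUCKET = "my-bucket"                # hypothetical bucket name

# Walk the directory tree and mirror it into the bucket, one object per file.
for root, _dirs, files in os.walk(LOCAL_DIR):
    for name in files:
        local_path = os.path.join(root, name)
        key = os.path.relpath(local_path, LOCAL_DIR).replace(os.sep, "/")
        s3.upload_file(local_path, BUCKET, key)
        print(f"uploaded {local_path} -> s3://{BUCKET}/{key}")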

React Router doesn't work in an AWS S3 bucket

Submitted by 爷,独闯天下 on 2020-01-10 07:48:09
Question: I deployed my React website's build/ folder into an AWS S3 bucket. If I go to www.mywebsite.com, it works, and if I click a link to go to the Project and About pages, it takes me to the right page. However, if I copy and share the page URL or go straight to a link like www.mywebsite.com/projects, it returns a 404. Here's my App.js code: const App = () => ( <Router> <div> <NavBar/> <Switch> <Route exact path="/" component={Home}/> <Route exact path="/projects" component={Projects}/> <Route exact
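A common workaround, sketched here with boto3, is to have the S3 website serve index.html for unknown paths so React Router can resolve the route on the client; the bucket name is taken from the question's example domain and the call is illustrative, not a confirmed fix for this exact setup.

import boto3

s3 = boto3.client("s3")

# Point both the index and the error document at index.html so a direct request to
# /projects still returns the single-page app and the client-side router takes over.
# (Behind CloudFront the equivalent is a custom error response mapping 403/404 to /index.html.)
s3.put_bucket_website(
    Bucket="www.mywebsite.com",     # hypothetical bucket matching the site domain
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "index.html"},
    },
)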