amazon-s3

Apache Ozone + AWS S3 .NET API: PutObject is creating a bucket instead of a key

Submitted by 安稳与你 on 2020-05-16 02:27:43
Question: I am trying to create keys in Apache Ozone using the AWS S3 API for .NET. The key I am trying to create must go inside a bucket called "test" that I created using the AWS S3 CLI. My code:

    static async Task WriteFile()
    {
        AmazonS3Config config = new AmazonS3Config();
        config.ServiceURL = "http://myApacheOzoneEndpoint:8744"; // This port is mapped from a docker container (not the original endpoint port for Ozone)
        AWSCredentials credentials = new BasicAWSCredentials("testuser/scm@EXAMPLE.COM", …
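This symptom often comes down to addressing style: with virtual-hosted addressing the SDK moves the bucket name into the hostname, which an S3-compatible endpoint such as Ozone can misinterpret. As a minimal sketch of the same call with path-style addressing forced, here it is in Python/boto3; the endpoint and access key come from the question, the secret key and object key are placeholders, and path-style being the fix is an assumption to verify. (The corresponding switch in the .NET SDK is AmazonS3Config.ForcePathStyle.)

    import boto3
    from botocore.config import Config

    # Force path-style addressing (http://host:port/bucket/key) so the
    # bucket name stays in the URL path instead of the hostname.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://myApacheOzoneEndpoint:8744",
        aws_access_key_id="testuser/scm@EXAMPLE.COM",
        aws_secret_access_key="placeholder-secret",
        config=Config(s3={"addressing_style": "path"}),
    )

    # Put a key into the existing "test" bucket.
    s3.put_object(Bucket="test", Key="hello.txt", Body=b"hello from boto3")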

TensorBoard without callbacks for Keras docker image in SageMaker

Submitted by 拥有回忆 on 2020-05-15 21:19:15
Question: I'm trying to add TensorBoard functionality to this SageMaker example: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/keras_bring_your_own/hpo_bring_your_own_keras_container.ipynb

The issue is that SageMaker's Estimator.fit() does not seem to support Keras models compiled with callbacks. A GitHub issue describes what I need to do for TensorBoard functionality: "You need your code inside the container to save checkpoints to …
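One way around Estimator.fit() not passing callbacks is to attach them inside the container's own training script and sync the TensorBoard event files to S3 from there. A hedged sketch of that idea follows; the log directory, bucket, and prefix are placeholders, not values from the original notebook:

    import os
    import boto3
    import tensorflow as tf

    class S3SyncCallback(tf.keras.callbacks.Callback):
        """Upload TensorBoard event files to S3 after each epoch."""
        def __init__(self, log_dir, bucket, prefix):
            super().__init__()
            self.log_dir, self.bucket, self.prefix = log_dir, bucket, prefix
            self.s3 = boto3.client("s3")

        def on_epoch_end(self, epoch, logs=None):
            # Walk the log dir and mirror every event file into S3.
            for root, _, files in os.walk(self.log_dir):
                for name in files:
                    path = os.path.join(root, name)
                    rel = os.path.relpath(path, self.log_dir).replace(os.sep, "/")
                    self.s3.upload_file(path, self.bucket, f"{self.prefix}/{rel}")

    log_dir = "/opt/ml/output/tensorboard"  # assumed location inside the container
    callbacks = [
        tf.keras.callbacks.TensorBoard(log_dir=log_dir),
        S3SyncCallback(log_dir, "my-bucket", "tensorboard-logs"),
    ]
    # model.fit(x_train, y_train, callbacks=callbacks)

TensorBoard can then be pointed at the S3 prefix (or at a local mirror of it) while the training job runs.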

Redirecting AWS API Gateway to S3 Binary

Submitted by 别说谁变了你拦得住时间么 on 2020-05-15 04:29:08
Question: I'm trying to download large binaries from S3 via an API Gateway URL. Because the maximum download size in API Gateway is limited, I thought I could just point the base URL at Amazon S3 (in the Swagger file) and append the folder/item of the binary I want to download. But everything I find redirects through API Gateway via a Lambda function, and I don't want that. I want a Swagger file where the redirect is already configured: if I call <api_url>/folder/item I want to be redirected to s3-url/folder…
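One Lambda-free pattern (a sketch, not the original answer) is a MOCK integration whose integration response returns a 302 with a Location header pointing at the S3 URL; the bucket URL, path, and status code below are illustrative placeholders:

    paths:
      /folder/item:
        get:
          responses:
            "302":
              description: Redirect to the binary in S3
              headers:
                Location:
                  type: string
          x-amazon-apigateway-integration:
            type: mock
            requestTemplates:
              application/json: '{"statusCode": 302}'
            responses:
              default:
                statusCode: "302"
                responseParameters:
                  method.response.header.Location: "'https://s3.amazonaws.com/my-bucket/folder/item'"

Because the client follows the redirect straight to S3, the download never passes through API Gateway, so its payload size limit no longer applies.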

Split S3 file into smaller files of 1000 lines

Submitted by 一曲冷凌霜 on 2020-05-15 01:59:11
Question: I have a text file on S3 with around 300 million lines. I'm looking to split this file into smaller files of 1,000 lines each (with the last file containing the remainder), which I'd then like to put into another folder or bucket on S3. So far, I've been running this on my local drive using the Linux command:

    split -l 1000 file

which splits the original file into smaller files of 1,000 lines. However, with a file this large, it seems inefficient to download and then re-upload from my…
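S3 cannot split an object server-side, so the bytes must pass through a client somewhere; running the job on an EC2 instance in the same region at least keeps the traffic inside AWS. The object can be streamed and re-chunked without ever landing on disk. A minimal boto3 sketch, with bucket and key names as placeholders:

    import boto3
    from itertools import islice

    s3 = boto3.client("s3")

    def split_s3_file(bucket, key, out_prefix, lines_per_file=1000):
        # Stream the object line by line instead of downloading it first.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"]
        lines = (line + b"\n" for line in body.iter_lines())
        part = 0
        while True:
            chunk = b"".join(islice(lines, lines_per_file))
            if not chunk:
                break
            s3.put_object(Bucket=bucket, Key=f"{out_prefix}/part-{part:06d}", Body=chunk)
            part += 1

    split_s3_file("my-bucket", "big-file.txt", "splits")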

upload file from url to s3 bucket

Submitted by 走远了吗. on 2020-05-14 10:43:48
Question: I have a Node.js program running on Heroku that gives me the URLs of files. These files need to be stored in an S3 bucket. My understanding is that there is no way to upload a file from a URL directly to an S3 bucket. How would you suggest I get the files from the URL into the S3 bucket? I've seen talk of using an EC2 instance, but I would like to avoid that if possible. Is there any way to do this using just Heroku and S3?

Answer 1: I understand this is tagged node-js, but if someone needs this for…
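No EC2 instance is needed: the dyno itself can stream the download straight into S3 without touching disk. The answer excerpt above is for Node.js; as a hedged illustration of the same idea in Python (URL, bucket, and key are placeholders):

    import boto3
    import requests

    def upload_url_to_s3(url, bucket, key):
        # Stream the download so the whole file never sits in memory.
        with requests.get(url, stream=True) as resp:
            resp.raise_for_status()
            resp.raw.decode_content = True  # transparently undo gzip/deflate
            boto3.client("s3").upload_fileobj(resp.raw, bucket, key)

    upload_url_to_s3("https://example.com/file.pdf", "my-bucket", "files/file.pdf")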

Serve index file instead of download prompt

Submitted by 点点圈 on 2020-05-14 01:19:59
Question: I have my website hosted on S3 with CloudFront as a CDN, and I need these two URLs to behave the same and serve the index.html file within the directory:

    example.com/directory
    example.com/directory/

The one with the / at the end incorrectly prompts the browser to download a zero-byte file with a random hash for its name. Without the slash it returns my 404 page. How can I get both paths to deliver the index.html file within the directory? If there's a way I'm "supposed" to do…
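The zero-byte download is typically the S3 "folder" placeholder object being served through CloudFront's REST origin, which does not resolve directory index documents. One common fix (a sketch, not necessarily how this site was ultimately configured) is a small Lambda@Edge origin-request function that rewrites both URL forms before they reach S3:

    # Lambda@Edge "origin request" handler: rewrite directory-style URIs
    # so CloudFront asks S3 for the directory's index.html.
    def handler(event, context):
        request = event["Records"][0]["cf"]["request"]
        uri = request["uri"]
        if uri.endswith("/"):
            request["uri"] = uri + "index.html"   # /directory/ -> /directory/index.html
        elif "." not in uri.rsplit("/", 1)[-1]:
            request["uri"] = uri + "/index.html"  # /directory  -> /directory/index.html
        return request

An alternative with no function at all is to use the bucket's S3 website endpoint (rather than the REST endpoint) as a custom origin, since the website endpoint resolves trailing-slash index documents natively.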

PHP Amazon S3 Uploads And Tags

Submitted by 不羁岁月 on 2020-05-13 19:20:47
Question: I'm coding a video-sharing site. I'm using S3 to store and serve the videos. I've coded tags for the videos in my MySQL database, but I saw that S3 supports setting tags on uploaded files. Here's the code I'm using to upload files:

    try {
        // Create an S3Client
        $s3Client = new S3Client([
            'region'      => 'us-east-1',
            'version'     => 'latest',
            'credentials' => [
                'key'    => '',
                'secret' => ''
            ]
        ]);
        $result = $s3Client->putObject([
            'Bucket'     => $bucket,
            'Key'        => $assetFilename,
            'SourceFile' => $fileTmpPath, …
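Object tags can be attached at upload time by passing a Tagging parameter formatted as a URL-encoded "key=value&key=value" string; the PHP SDK's putObject accepts a 'Tagging' entry the same way. Shown below as a hedged boto3 sketch with placeholder bucket, key, and tag values:

    import boto3

    s3 = boto3.client("s3")
    with open("video.mp4", "rb") as f:
        s3.put_object(
            Bucket="my-bucket",
            Key="videos/video.mp4",
            Body=f,
            # Tags ride along as a URL-encoded query string.
            Tagging="category=comedy&uploader=42",
        )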

AWS Lambda: How to extract a tgz file in a S3 bucket and put it in another S3 bucket

Submitted by 坚强是说给别人听的谎言 on 2020-05-13 07:06:38
Question: I have an S3 bucket named "Source". Many .tgz files are being pushed into that bucket in real time. I wrote Java code that extracts each .tgz file and pushes its contents into a "Destination" bucket, and deployed it as a Lambda function. I get the .tgz file as an InputStream in my Java code. How do I extract it in Lambda? I'm not able to create a file in Lambda; it throws "FileNotFound (Permission Denied)" in Java.

    AmazonS3 s3Client = new AmazonS3Client();
    S3Object s3Object = s3Client.getObject(new …
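The FileNotFound (Permission Denied) error is expected: a Lambda function's filesystem is read-only except for /tmp. The archive does not have to touch disk at all, though; it can be streamed member by member. A hedged Python sketch of that approach (the destination bucket name is a placeholder, and the event is assumed to be a standard S3 put notification):

    import tarfile
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        rec = event["Records"][0]["s3"]
        bucket, key = rec["bucket"]["name"], rec["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"]
        # Mode "r|gz" reads the stream sequentially, so no seekable
        # file (and no temp file) is needed.
        with tarfile.open(fileobj=body, mode="r|gz") as tar:
            for member in tar:
                if member.isfile():
                    s3.put_object(
                        Bucket="destination-bucket",
                        Key=member.name,
                        Body=tar.extractfile(member).read(),
                    )

In Java, the equivalent is to wrap the S3 InputStream in a streaming gzip/tar reader rather than extracting to a file, or to extract under /tmp if a file is truly required.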

How Can I Write Logs Directly to AWS S3 from Memory Without First Writing to stdout? (Python, boto3)

Submitted by て烟熏妆下的殇ゞ on 2020-05-13 05:16:58
Question: I'm trying to write Python log files directly to S3 without first writing them to stdout. I want the log files to be written to S3 automatically when the program is done running. I'd like to use the boto3 put_object method:

    import atexit
    import logging
    import boto3

    def write_logs(body, bucket, key):
        s3 = boto3.client("s3")
        s3.put_object(Body=body, Bucket=bucket, Key=key)

    log = logging.getLogger("some_log_name")
    log.info("Hello S3")
    atexit.register(write_logs, body=log, bucket="bucket_name", …
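As written, atexit would hand the Logger object itself to put_object as Body, which it cannot serialize. One way to make this work (a sketch; the bucket and logger names are the question's placeholders) is to point a StreamHandler at an in-memory StringIO buffer and upload the buffer's contents at exit:

    import atexit
    import io
    import logging
    import boto3

    # Collect log records in memory instead of a file or stdout.
    log_buffer = io.StringIO()
    log = logging.getLogger("some_log_name")
    log.setLevel(logging.INFO)
    log.addHandler(logging.StreamHandler(log_buffer))

    def write_logs(bucket, key):
        boto3.client("s3").put_object(Body=log_buffer.getvalue(), Bucket=bucket, Key=key)

    atexit.register(write_logs, bucket="bucket_name", key="logs/run.log")

    log.info("Hello S3")  # buffered, uploaded when the interpreter exits

Note that atexit handlers do not run if the process is killed outright, so anything that must survive a hard crash should be flushed to S3 periodically instead.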