amazon-s3

Credentials can't be located for S3 Flask app on Heroku

人走茶凉 submitted on 2020-06-18 10:45:25

Question: My Flask app works locally with an AWS S3 bucket, but when I try to get it working on Heroku, I keep getting this error:

    2020-06-07T00:58:29.174989+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/botocore/signers.py", line 160, in sign
    2020-06-07T00:58:29.174989+00:00 app[web.1]: auth.add_auth(request)
    2020-06-07T00:58:29.174989+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/botocore/auth.py", line 357, in add_auth
    2020-06-07T00:58:29.174989+00:00 …
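
One common cause of this symptom: Heroku dynos have no ~/.aws/credentials file, so boto3 only finds credentials if they are exposed as environment variables. A minimal sketch, assuming the keys have been set as Heroku Config Vars under the SDK's default names:

    import os

    import boto3

    # Set beforehand with: heroku config:set AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=...
    # boto3 picks these env vars up automatically; passing them explicitly just
    # makes the dependency visible.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
        region_name=os.environ.get("AWS_DEFAULT_REGION", "us-east-1"),
    )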

Mocking two S3 API calls in the same AWS Lambda function using Jest

你。 submitted on 2020-06-17 14:10:49

Question: I'm trying to mock two S3 calls in the same Lambda function, but I seem to be getting Access Denied errors, which leads me to believe the S3 calls are not being mocked. The syntax I'm currently using works when mocking just one S3 call in the function, but the function I'm currently testing makes two S3 calls (deleteObject and putObject). Here is my mock code:

    const putObjectMock = jest.fn(() => ({
      promise: jest.fn(),
    }));
    const deleteObjectMock = jest.fn(() => ({
      promise: jest.fn(),
    }));
    jest…
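
A sketch of the same two-call stubbing idea using Python's botocore Stubber instead of Jest, for illustration only. The common point is that both operations must be stubbed on the same client instance the function under test uses; bucket and key names here are placeholders:

    import boto3
    from botocore.stub import Stubber

    # Stub BOTH calls the handler makes; an unstubbed call raises instead of
    # hitting real S3 (the analogue of an unmocked call causing Access Denied).
    s3 = boto3.client("s3", region_name="us-east-1")
    stubber = Stubber(s3)
    stubber.add_response("delete_object", {}, {"Bucket": "my-bucket", "Key": "old.txt"})
    stubber.add_response("put_object", {}, {"Bucket": "my-bucket", "Key": "new.txt", "Body": b"data"})

    with stubber:
        s3.delete_object(Bucket="my-bucket", Key="old.txt")
        s3.put_object(Bucket="my-bucket", Key="new.txt", Body=b"data")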

Can a client set the file name and extension programmatically when they PUT file content to a presigned S3 URL that the service vends out?

核能气质少年 submitted on 2020-06-17 13:27:17

Question: Here is the starter code I'm using, from the documentation. I'm trying to create a service that vends out presigned S3 URLs. I use the default settings of GeneratePresignedUrlRequest, as below:

    import com.amazonaws.AmazonServiceException;
    import com.amazonaws.HttpMethod;
    import com.amazonaws.SdkClientException;
    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3…
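
For context, the stored file name is the object key, which is fixed at signing time; a client PUTting to the URL cannot change it. A sketch of the equivalent signing step with boto3 rather than the Java SDK (bucket and key are placeholders):

    import boto3

    # The Key chosen here IS the file name/extension the upload will get;
    # the uploader only supplies the bytes.
    s3 = boto3.client("s3", region_name="us-east-1")
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "my-bucket", "Key": "uploads/report.pdf"},
        ExpiresIn=3600,  # seconds
    )
    print(url)  # the client performs: PUT <url> with the file content as the body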

Can't delete S3 buckets - Error Data not found

 ̄綄美尐妖づ submitted on 2020-06-17 13:20:48
Question: I can't get rid of five buckets in S3. Every screen in the AWS console says "Error Data not found" (i.e. Overview, Properties, Permissions, Management, Access points). I can't set lifecycle rules to delete objects, but the buckets never had anything in them and versioning was never enabled anyway. I've also tried forcing it in my terminal:

    aws s3 rb s3://bucketblah --force

...but it fails and I get:

    remove_bucket failed: Unable to delete all objects in the bucket, bucket will not be deleted.
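
When `rb --force` reports undeletable objects, one thing worth trying is deleting any lingering versions and delete markers explicitly before removing the bucket. A sketch with boto3, assuming the bucket name from the question:

    import boto3

    bucket = boto3.resource("s3").Bucket("bucketblah")
    # Removes all object versions AND delete markers, which "rb --force"
    # does not touch; harmless if the bucket really is empty.
    bucket.object_versions.delete()
    bucket.delete()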

How to deploy TensorFlow Serving using Docker and DigitalOcean Spaces

本秂侑毒 submitted on 2020-06-17 10:28:23

Question: How do you configure TensorFlow Serving to use files stored in DigitalOcean Spaces? It's important that the solution:

- provides access to both the configuration and model files
- provides non-public access to the data

I have configured a bucket named your_bucket_name in DigitalOcean Spaces with the following structure:

    - your_bucket_name
      - config
        - batching_parameters.txt
        - monitoring_config.txt
        - models.config
      - models
        - model_1
          - version_1.1
            - variables
              - variables.data-00000-of-00001
              - …
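
Since Spaces speaks the S3 protocol, TensorFlow Serving's S3 support can usually be pointed at it purely through environment variables. A sketch that starts the stock serving image via docker-py (endpoint, keys, and paths are placeholders; AWS_REGION is a dummy value the S3 layer nonetheless requires):

    import docker  # pip install docker

    client = docker.from_env()
    client.containers.run(
        "tensorflow/serving",
        command=["--model_config_file=s3://your_bucket_name/config/models.config"],
        environment={
            "AWS_ACCESS_KEY_ID": "YOUR_SPACES_KEY",
            "AWS_SECRET_ACCESS_KEY": "YOUR_SPACES_SECRET",
            "AWS_REGION": "us-east-1",                     # required but unused by Spaces
            "S3_ENDPOINT": "nyc3.digitaloceanspaces.com",  # your Spaces region endpoint
            "S3_USE_HTTPS": "1",
        },
        ports={"8501/tcp": 8501},  # REST API
        detach=True,
    )

The keys travel only in the container environment, never in bucket ACLs, which keeps the data non-public.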

Unsupported body payload object when trying to upload to Amazon S3 using stream.PassThrough

女生的网名这么多〃 submitted on 2020-06-17 09:36:21

Question: After not finding any working solution to this problem, I am pasting my Angular Electron app code.

Component:

    const pipeline = this.syncService.uploadSong(localPath, filename);
    pipeline.on('close', (data) => {
      // upload finished
    });
    pipeline.on('error', (err) => {
      console.error(err.toString());
    });

And the service is:

    uploadSong(localPath: string, filename: string) {
      const {writeStream, promise} = this.uploadStream(filename);
      const readStream = fs.createReadStream(localPath);
      return…
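
The JS SDK typically raises "Unsupported body payload object" when Body is not a type it recognizes (Buffer, string, or readable stream), so the usual fix is making sure the PassThrough stream itself is what reaches s3.upload. As a language-neutral illustration of the same streaming upload, a boto3 sketch (paths and names are placeholders):

    import boto3

    # upload_fileobj streams any file-like object to S3 in parts, the role the
    # PassThrough stream plays with s3.upload in the JS SDK.
    s3 = boto3.client("s3")
    with open("/path/to/song.mp3", "rb") as body:
        s3.upload_fileobj(body, "my-bucket", "songs/song.mp3")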

AWS CloudFront misinterprets routing rule and redirects resource back to S3 bucket object URL

依然范特西╮ submitted on 2020-06-17 02:52:07

Question: I have an AWS S3 bucket; let's call it example.com. I have the bucket configured for static website hosting; the bucket's static site URL, for example, might be http://example.com.s3-website-us-west-1.amazonaws.com. I also have a CloudFront distribution with an AWS-managed certificate, so that when I access https://example.com/ it serves content out of the S3 bucket http://example.com.s3-website-us-west-1.amazonaws.com/. That all works like a dream. On my site I have a file https:…
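
One detail that often matters with this symptom: an S3 website routing rule redirects to the website endpoint's own host unless the rule pins Protocol and HostName explicitly. A sketch of setting such a rule with boto3 so redirects stay on the public domain (the prefixes are hypothetical; the excerpt doesn't show the asker's actual rule):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_website(
        Bucket="example.com",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "RoutingRules": [{
                "Condition": {"KeyPrefixEquals": "old/"},
                "Redirect": {
                    "Protocol": "https",
                    "HostName": "example.com",  # keep the browser on the CloudFront domain
                    "ReplaceKeyPrefixWith": "new/",
                },
            }],
        },
    )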

AWS S3 buffer size not increasing

廉价感情. submitted on 2020-06-17 02:10:15

Question: I am creating an S3 client with a modified buffer size; however, it does not seem to make a difference, as the same number of bytes is always read from the stream. Example code:

    var s3Client = new AmazonS3Client(access, secret, token, new AmazonS3Config
    {
        RegionEndpoint = Amazon.RegionEndpoint.USEast1,
        BufferSize = 1000000, // 1 MB (completely arbitrary)
    });

    await s3Client.PutObjectAsync(new PutObjectRequest
    {
        Key = fileName,
        Bucket = bucketName,
        InputStream = new MyCustomStream(...)
    });

When…
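
For comparison, in boto3 the amount read from the source per request is governed by the transfer configuration rather than a client-level buffer setting; the .NET BufferSize may similarly be overridden by the HTTP layer's own read size. A sketch with placeholder names:

    import boto3
    from boto3.s3.transfer import TransferConfig

    # multipart_chunksize controls how much is read from the source per part.
    config = TransferConfig(multipart_chunksize=8 * 1024 * 1024)  # 8 MB
    boto3.client("s3").upload_file("/path/to/file.bin", "my-bucket", "file.bin", Config=config)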