amazon-s3

Preventing a user from even knowing about other users (folders) on AWS S3

Submitted by 北慕城南 on 2020-01-04 13:45:37
Question: I have a question about writing IAM policies for AWS S3 that was partially answered in this nice post by Jim Scharf: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/ Taking Jim's post as a starting point, what I am trying to achieve is preventing a user from even knowing about the existence of other users who have access to the same bucket while using the S3 console. Jim's solution, as well as others I've found, …
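The usual building block here (and the one Jim's post is built around) is the ${aws:username} policy variable: grant s3:ListBucket only when the s3:prefix condition matches the user's own folder, so a listing never returns anyone else's prefix. A minimal sketch in Python with boto3; the bucket name, user name, and folder layout are placeholder assumptions, not taken from the question:

```python
import json
import boto3

iam = boto3.client("iam")

# Inline policy: the user may list only their own prefix and read/write
# only their own objects, so other users' folders never show up.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOwnFolderOnly",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-shared-bucket",
            "Condition": {"StringLike": {"s3:prefix": "${aws:username}/*"}},
        },
        {
            "Sid": "ReadWriteOwnObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-shared-bucket/${aws:username}/*",
        },
    ],
}

iam.put_user_policy(
    UserName="alice",  # hypothetical user
    PolicyName="s3-own-folder-only",
    PolicyDocument=json.dumps(policy),
)
```

The trade-off versus Jim's version: because the bucket root is never listable, users cannot click down from the top of the bucket in the console and have to open their folder's URL directly; Jim's policy avoids that by allowing a root listing, which is exactly what exposes the other folders.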

Will getObjectSummaries get the count of objects stored in an S3 bucket?

Submitted by 删除回忆录丶 on 2020-01-04 09:28:01
Question: I need to know the number of files stored under an S3 bucket. Currently, ObjectListing doesn't have a method such as count or numberOfObject. However, it has a method that returns a List of S3ObjectSummary: public java.util.List<S3ObjectSummary> getObjectSummaries(). Since it is a List, I can call its size() method, but is it accurate to assume that the size of the getObjectSummaries() list is the same as the number of objects stored in the bucket? Answer 1: No -- it is more …
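The truncated answer is heading toward pagination: a single listing call returns at most 1,000 keys, so getObjectSummaries().size() only counts the first page. In the Java SDK you keep calling listNextBatchOfObjects while isTruncated() is true and sum the pages; the same idea as a sketch in Python with boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Each ListObjectsV2 call returns at most 1,000 keys, so paginate and
# accumulate the per-page counts to get the real total.
paginator = s3.get_paginator("list_objects_v2")
count = 0
for page in paginator.paginate(Bucket="my-test-bucket"):
    count += page.get("KeyCount", 0)

print(f"objects in bucket: {count}")
```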

How can I enforce file types for uploads with an AWS S3 bucket policy?

Submitted by 冷暖自知 on 2020-01-04 07:54:42
Question: Using a bucket policy for AWS S3, is it possible to enforce that the file being uploaded (PutObject) has the file extension ".txt"? If so, what would that bucket policy look like? Answer 1: Something like this:

```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket/*.txt"
    }
  ]
}
```

Source: https://stackoverflow.com/questions/17050308/how-can-i-enforce-file-type-uploads
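One caveat to that answer: an Allow scoped to *.txt only grants uploads of .txt keys; if the principal is also granted s3:PutObject elsewhere (say, in an IAM user policy), other extensions still get through. To truly enforce the restriction you can add an explicit Deny on everything that is not *.txt. A sketch in Python with boto3; the bucket name is a placeholder, not from the question:

```python
import json
import boto3

s3 = boto3.client("s3")

# An explicit Deny overrides any Allow, so non-.txt uploads are rejected
# no matter what other policies grant.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonTxtUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "NotResource": "arn:aws:s3:::my-upload-bucket/*.txt",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-upload-bucket", Policy=json.dumps(policy))
```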

SignatureDoesNotMatch error when uploading to S3 via a pre-signed URL using Ionic 2

Submitted by 点点圈 on 2020-01-04 07:47:12
Question: I am trying to upload a video to S3 and have a pre-signed PUT URL. The following is the code to do so:

```typescript
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';
import { MediaCapture } from 'ionic-native';
import { Http } from '@angular/http';
import { Transfer } from 'ionic-native';

@Component({
  selector: 'page-home',
  templateUrl: 'home.html'
})
export class HomePage {
  public base64Image: string;

  constructor(private navController: NavController, public http: Http) {
    // … (question truncated here)
```
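The snippet is cut off before the upload itself, but SignatureDoesNotMatch with a pre-signed PUT almost always means the request the client sends differs from the one that was signed, most often in the Content-Type header. A sketch of the signing side in Python with boto3 (bucket, key, and content type are placeholder assumptions); whatever ContentType is signed here is exactly what the Ionic client must send as its Content-Type header:

```python
import boto3

s3 = boto3.client("s3")

# ContentType becomes part of the signature: if the uploader sends a
# different Content-Type header, S3 returns SignatureDoesNotMatch.
url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-video-bucket",
        "Key": "uploads/video.mp4",
        "ContentType": "video/mp4",
    },
    ExpiresIn=3600,  # URL valid for one hour
)
print(url)
```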

Redirect with no auth

Submitted by 笑着哭i on 2020-01-04 07:42:45
Question: According to the docs, it should be as simple as:

```python
data = self.http_pool.urlopen('GET', file_url, preload_content=False, retries=max_download_retries)
```

and

```python
request.add_unredirected_header(key, header)
```

"Add a header that will not be added to a redirected request."

But I cannot seem to find any examples of how this can be achieved. I am using PyUpdater to download updates from Bitbucket and launch the newest version of the exe. I am using this library to create a script that connects to Bitbucket fine, …
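Part of the confusion is that the two quoted lines come from different libraries: urlopen here is urllib3, while add_unredirected_header belongs to urllib.request in the standard library; urllib3 has no equivalent. With urllib.request the pattern is the one the docs describe, and it fits this exact scenario: Bitbucket redirects downloads to S3, and if the Authorization header follows the redirect, S3 rejects the request. A minimal sketch, with a placeholder URL and token:

```python
import urllib.request

# Placeholder download URL; Bitbucket will answer with a redirect to S3.
file_url = "https://bitbucket.org/owner/repo/downloads/update.zip"

req = urllib.request.Request(file_url)
# Sent on the initial request only; NOT copied onto the redirected
# request, so the S3 redirect target never sees the credential.
req.add_unredirected_header("Authorization", "Bearer <token>")

with urllib.request.urlopen(req) as resp:
    data = resp.read()
```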

AWS S3 GetObjectAsync Hangs/Times Out

Submitted by 自古美人都是妖i on 2020-01-04 07:03:12
Question: Note: answering my own question to help others in the future. I'm following the official documentation to get a text file from an S3 bucket, and it hangs:

```csharp
static async Task ReadObjectDataAsync()
{
    string responseBody = "";
    try
    {
        GetObjectRequest request = new GetObjectRequest
        {
            BucketName = bucketName,
            Key = keyName
        };
        // THIS NEXT LINE HANGS!!!!
        using (GetObjectResponse response = await client.GetObjectAsync(request))
        using (Stream responseStream = response.ResponseStream)
        using (StreamReader // … (question truncated here)
```
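The self-answer is cut off above, but a frequent cause of exactly this symptom is blocking on the task somewhere up the call chain (.Result or .Wait()) instead of awaiting all the way down, which deadlocks even though GetObjectAsync itself is fine. As a quick cross-check that the bucket, key, and credentials are good, here is the same read done synchronously in Python with boto3 (names are placeholders); if this works from the same machine, the hang is in the calling code, not in S3:

```python
import boto3

s3 = boto3.client("s3")

# Synchronous equivalent of GetObjectAsync + StreamReader.ReadToEnd().
response = s3.get_object(Bucket="my-bucket", Key="my-text-file.txt")
response_body = response["Body"].read().decode("utf-8")
print(response_body)
```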

Amazon S3 object expiration

Submitted by 人盡茶涼 on 2020-01-04 06:29:10
Question: I have read a lot about how to expire (delete) Amazon S3 objects and tried to set up a lifecycle rule to do it; however, the objects are not removed, and I am wondering what I did wrong. I have objects in S3 organized like this: Amazon S3 > my-test-bucket > my-test-org > a.csv, b.xml, c.xsl... I need to delete all those files in my-test-bucket once they are 365 days old. There are lots of files there more than two years old, so with this rule those files should be removed. In the lifecycle rule, I specified adding …
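The rule itself is cut off above, but with this layout there are two usual suspects: the rule's prefix has to match the actual key prefix (here my-test-org/, trailing slash included), and expiration is asynchronous: S3 evaluates lifecycle rules roughly once a day, so objects already past the threshold can linger for a day or more before they disappear. A sketch of the matching rule in Python with boto3, using the bucket and folder names from the question:

```python
import boto3

s3 = boto3.client("s3")

# Expire everything under my-test-org/ once it is 365 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-test-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-my-test-org-after-365-days",
                "Filter": {"Prefix": "my-test-org/"},  # trailing slash matters
                "Status": "Enabled",
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```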

AWS EMR 4.0 - How can I add a custom JAR step to run shell commands

Submitted by 跟風遠走 on 2020-01-04 05:58:36
Question: I am trying to run shell commands using steps on EMR 4.0.0 and used this link for reference: http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-hadoop-script.html But I want to know what to put for 'command-runner.jar' in the 'JAR location' field (screenshot: http://i.stack.imgur.com/CRicz.png). I kept 'command-runner.jar' in AWS S3 and tried to load it from that location, gave the S3 location of my 'example.sh' file in 'Arguments', and after adding the step it failed with this exception: …
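The likely mistake, given the description, is pointing the step at a copy of command-runner.jar in S3: the JAR already exists on the cluster, so the 'JAR location' field should contain just the literal string command-runner.jar, and the command to run goes in the arguments. A sketch of adding such a step in Python with boto3; the cluster ID, bucket, and script path are placeholders:

```python
import boto3

emr = boto3.client("emr")

# command-runner.jar is resolved on the cluster itself; don't upload it
# to S3. The step copies the script down from S3 and runs it.
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # placeholder cluster ID
    Steps=[
        {
            "Name": "run example.sh",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "bash",
                    "-c",
                    "aws s3 cp s3://my-bucket/example.sh . && bash example.sh",
                ],
            },
        }
    ],
)
```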