amazon-s3

How do I add or modify the Content-Disposition of an existing object in Amazon S3?

Submitted by 只愿长相守 on 2020-01-15 06:34:05
Question: We have hundreds of objects in an AWS S3 bucket that don't have a Content-Disposition set. I'm using the Ruby aws-sdk gem. How do you add or change the Content-Disposition on these objects WITHOUT re-uploading the files? I have tried

obj.write(:content_disposition => 'attachment')
obj.copy_from(obj.key, :content_disposition => 'attachment')

and also copy_to() and move_to(), but none of these seem to add the Content-Disposition to the objects. In a few cases, the objects don't seem to …
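
S3 does not let you edit an existing object's Content-Disposition in place; the usual approach is to copy the object onto itself with replaced metadata. A minimal sketch of that idea with boto3 (Python) rather than the Ruby SDK the question uses; bucket and key names are placeholders:

import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'        # placeholder
key = 'some/object.pdf'     # placeholder

# Copy the object onto itself, replacing its metadata so the new
# Content-Disposition header is stored with the object.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={'Bucket': bucket, 'Key': key},
    ContentDisposition='attachment',
    MetadataDirective='REPLACE',
)

Note that MetadataDirective='REPLACE' discards any existing user metadata unless it is supplied again, and objects larger than 5 GB need a multipart copy instead of a single copy_object call.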

Redshift copy creates different compression encodings from analyze

Submitted by 不羁的心 on 2020-01-15 06:29:28
Question: I've noticed that AWS Redshift recommends different column compression encodings from the ones it applies automatically when loading data (via COPY) into an empty table. For example, I created a table and loaded data from S3 as follows:

CREATE TABLE Client (
    Id varchar(511),
    ClientId integer,
    CreatedOn timestamp,
    UpdatedOn timestamp,
    DeletedOn timestamp,
    LockVersion integer,
    RegionId varchar(511),
    OfficeId varchar(511),
    CountryId varchar(511),
    FirstContactDate timestamp, …
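
One way to see the mismatch side by side is to read the encodings COPY actually applied from pg_table_def and then ask Redshift for its current recommendation with ANALYZE COMPRESSION. A rough Python sketch using psycopg2, assuming placeholder connection details and that the driver surfaces the ANALYZE COMPRESSION report as an ordinary result set:

import psycopg2

# Placeholder connection details for the Redshift cluster
conn = psycopg2.connect(host='my-cluster.example.redshift.amazonaws.com',
                        port=5439, dbname='dev', user='admin', password='***')
conn.autocommit = True   # ANALYZE COMPRESSION cannot run inside a transaction block
cur = conn.cursor()

# Encodings that COPY chose when it loaded the empty table
cur.execute("""SELECT "column", type, encoding
               FROM pg_table_def
               WHERE tablename = 'client'""")
for column, coltype, encoding in cur.fetchall():
    print(column, coltype, encoding)

# Encodings Redshift would recommend for the data as it stands now
cur.execute('ANALYZE COMPRESSION client')
for row in cur.fetchall():
    print(row)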

S3 Multipart upload with pause and resume functionality

Submitted by 穿精又带淫゛_ on 2020-01-15 05:30:06
Question: I am trying to achieve an S3 multipart upload with pause and resume options. I am using the s3-upload-stream npm package for this. It works fine until the user closes the application. When the user closes the application accidentally, I have to resume the upload manually, so I am using this method as they mention:

var upload = s3Stream.upload(
  { Bucket: "bucket-name",
    Key: "key-name",
    ACL: "public-read",
    StorageClass: "REDUCED_REDUNDANCY",
    ContentType: "binary/octet-stream" },
  { UploadId: …
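
Underneath any SDK, the resume flow is the same: persist the UploadId, ask S3 which parts it already received, upload only the missing parts, then complete the upload. A minimal boto3 (Python) sketch of that flow, independent of the s3-upload-stream package in the question; the bucket, key, upload id, and file name are placeholders:

import boto3

s3 = boto3.client('s3')
bucket, key = 'bucket-name', 'key-name'   # placeholders
upload_id = 'saved-upload-id'             # persisted before the app closed
part_size = 8 * 1024 * 1024               # parts (except the last) must be >= 5 MB

# Parts S3 already received for this multipart upload
done = s3.list_parts(Bucket=bucket, Key=key, UploadId=upload_id)
parts = {p['PartNumber']: p['ETag'] for p in done.get('Parts', [])}

with open('local-file.bin', 'rb') as f:   # placeholder local file
    part_number = 1
    while True:
        chunk = f.read(part_size)
        if not chunk:
            break
        if part_number not in parts:      # skip parts uploaded before the pause
            resp = s3.upload_part(Bucket=bucket, Key=key, UploadId=upload_id,
                                  PartNumber=part_number, Body=chunk)
            parts[part_number] = resp['ETag']
        part_number += 1

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={'Parts': [{'PartNumber': n, 'ETag': parts[n]}
                               for n in sorted(parts)]})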

UserFrosting & AWS SDK

Submitted by 假如想象 on 2020-01-15 05:14:10
Question: I have the following code working as expected outside UserFrosting:

<?php
echo "Hello World.<br>";
require_once '../vendor/autoload.php';
use Aws\Common\Aws;

$aws = Aws::factory('../aws/aws-config.json');
$client = $aws->get('S3');
$bucket = 'my-public-public';
$iterator = $client->getIterator('ListObjects', array('Bucket' => $bucket));
foreach ($iterator as $object) {
    echo $object['Key'] . "<br>";
}

On my UserFrosting instance I managed to successfully load aws-sdk-php with Composer: - …
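
The actual sticking point in the question is the UserFrosting/Composer wiring, which a snippet cannot settle; purely for reference, the listing itself is a single paginated call in any SDK. The equivalent of the PHP iterator above as a boto3 (Python) sketch, with the bucket name taken from the question as a placeholder:

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

# Iterate over every object key in the bucket, page by page
for page in paginator.paginate(Bucket='my-public-public'):
    for obj in page.get('Contents', []):
        print(obj['Key'])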

Amazon S3, Syncing, Modified date vs. Uploaded Date

Submitted by 荒凉一梦 on 2020-01-15 03:21:28
Question: We're using the AWS SDK for .NET and I'm trying to pinpoint where we seem to be having a sync problem with our consumer applications. Basically, we have a push service that generates changeset files that get uploaded to S3, and our consumer applications are supposed to download these files and apply them in order to sync up to the correct state, which is not happening. There are some conflicting views on where the correct datestamps are represented. Our consumers were written to look at the …
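
One detail that commonly causes this confusion: the LastModified value S3 reports is the time the object was written to S3, not the source file's modification time, so if the original timestamp matters it has to be carried explicitly, for example in user metadata. A boto3 (Python) sketch of that idea (the question itself uses the .NET SDK; bucket, key, and path are placeholders):

import os
import boto3

s3 = boto3.client('s3')
bucket, key, path = 'changesets', 'changeset-0001.json', 'changeset-0001.json'  # placeholders

# Preserve the local file's mtime in user metadata at upload time
mtime = str(os.path.getmtime(path))
s3.upload_file(path, bucket, key, ExtraArgs={'Metadata': {'source-mtime': mtime}})

# On the consumer side, compare both timestamps
head = s3.head_object(Bucket=bucket, Key=key)
print('Uploaded to S3 at:', head['LastModified'])          # when S3 stored the object
print('Original file mtime:', head['Metadata'].get('source-mtime'))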

S3 notification when a file is overwritten or deleted

Submitted by 江枫思渺然 on 2020-01-14 22:44:53
Question: We store our log files on S3, and to meet PCI requirements we have to be notified when someone tampers with the log files. How can I be notified every time a PUT request replaces an existing object, or when an existing object is deleted? The alert should not fire when a new object is created unless it replaces an existing one.

Answer 1: S3 does not currently provide deletion or overwrite-only notifications. Deletion notifications were added after the initial launch of the …
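
With current S3 APIs, delete events at least can be subscribed to directly; overwrites are harder, because replacing an object still surfaces as an ordinary ObjectCreated event, so detecting them needs extra logic (or object versioning plus CloudTrail). A rough boto3 (Python) sketch that routes ObjectRemoved events to an SNS topic; the bucket name and topic ARN are placeholders, and the topic's access policy must allow S3 to publish to it:

import boto3

s3 = boto3.client('s3')

# Send a notification to an existing SNS topic whenever
# any object in the bucket is deleted.
s3.put_bucket_notification_configuration(
    Bucket='my-log-bucket',   # placeholder
    NotificationConfiguration={
        'TopicConfigurations': [{
            'TopicArn': 'arn:aws:sns:us-east-1:123456789012:log-tamper-alerts',  # placeholder
            'Events': ['s3:ObjectRemoved:*'],
        }]
    },
)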

Amazon S3 Bucket naming convention causes conflict with certificate

Submitted by 风格不统一 on 2020-01-14 19:20:28
Question: I renamed an Amazon S3 bucket from "multi-word-name" to what Amazon suggests as the best naming convention: "multi.word.name". The problem is that this breaks SSL certificate validation: "You attempted to reach multi.word.name.s3.amazonaws.com, but instead you actually reached a server identifying itself as *.s3.amazonaws.com ..." Any ideas? Thanks.

Answer 1: There are two potential issues involved with bucket naming in Amazon S3: DNS-compliant bucket naming, and SSL verification in …
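
The wildcard certificate *.s3.amazonaws.com only covers a single hostname label, so a dotted bucket name such as multi.word.name adds extra levels and fails hostname verification under virtual-hosted-style HTTPS. One common workaround is path-style addressing, which keeps the bucket name out of the TLS hostname. A boto3 (Python) sketch of that configuration, with region and bucket name as placeholders:

import boto3
from botocore.client import Config

# Path-style requests keep the bucket name out of the hostname,
# so the endpoint certificate matches even when the bucket name contains dots.
s3 = boto3.client(
    's3',
    region_name='us-east-1',   # placeholder region
    config=Config(s3={'addressing_style': 'path'}),
)

print(s3.list_objects_v2(Bucket='multi.word.name').get('KeyCount'))

AWS has announced plans to retire path-style addressing for newer buckets, so avoiding dots in bucket names remains the more future-proof fix.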

Upload a base64 string (image data) to S3 in Python using Boto3 and get a URL in return

Submitted by *爱你&永不变心* on 2020-01-14 17:42:55
Question: I am trying to upload a base64 string, which is basically image data, to an S3 bucket using Python. I have googled and got a few answers, but none of them works for me, and some answers use boto rather than boto3, so they are useless to me. I have also tried this link: Boto3: upload file from base64 to S3, but it is not working for me because the Object method is unknown to the s3 client. Following is my code so far:

import boto3
s3 = boto3.client('s3')
filename = photo.personId + '.png'
bucket_name = 'photos …
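
The usual fix is to decode the base64 string to bytes first (the .Object() accessor only exists on a boto3 resource, not on a client) and then call put_object; a URL can come from generate_presigned_url. A sketch under those assumptions, with placeholder bucket, key, and payload values:

import base64
import boto3

s3 = boto3.client('s3')
bucket_name = 'photos-bucket'        # placeholder
filename = 'person-123.png'          # placeholder for photo.personId + '.png'

# Strip an optional data-URI prefix, then decode to raw bytes
image_b64 = 'iVBORw0KGgoAAA...'      # placeholder base64 payload
if ',' in image_b64:
    image_b64 = image_b64.split(',', 1)[1]
image_bytes = base64.b64decode(image_b64)

s3.put_object(Bucket=bucket_name, Key=filename,
              Body=image_bytes, ContentType='image/png')

# A time-limited URL for reading the object back
url = s3.generate_presigned_url('get_object',
                                Params={'Bucket': bucket_name, 'Key': filename},
                                ExpiresIn=3600)
print(url)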

Can s3cmd retrieve metadata of an object on Amazon S3?

Submitted by 回眸只為那壹抹淺笑 on 2020-01-14 14:45:29
Question: With the s3cmd sync command I can back up encrypted files stored on S3 to local storage. When trying to restore these files back to S3, I have to set metadata such as x-amz-meta-x-amz-key and x-amz-meta-x-amz-iv for each file. My question is: how do I use s3cmd to retrieve the metadata of an object on Amazon S3?

Answer 1: The upstream github.com/s3tools/s3cmd master branch now has a commit that emits all metadata in the info command.

commit 36352241089e9b9661d9ee586dc19085f4bb13c9
Author: Andrew Gaul
Date: Tue Mar …
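
If s3cmd is not a hard requirement, the same per-object metadata can be read with a HEAD request; a boto3 (Python) sketch, with placeholder bucket and key names. Note that boto3 returns user metadata with the x-amz-meta- prefix already stripped:

import boto3

s3 = boto3.client('s3')

head = s3.head_object(Bucket='my-backup-bucket', Key='logs/app.log.gpg')  # placeholders

# User-defined metadata; x-amz-meta-x-amz-key shows up here as 'x-amz-key'
for name, value in head['Metadata'].items():
    print(name, '=', value)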