amazon-glacier

How can I calculate an AWS API signature (v4) in python?

对着背影说爱祢 submitted on 2021-02-07 12:17:32
Question: I'm attempting to generate a signature for an Amazon Glacier upload request, using the example requests and example functions provided by the AWS documentation, but I can't make it work. At this point, I'm certain I'm missing something incredibly obvious:

    #!/bin/env python
    import hmac
    import hashlib

    # This string to sign taken from: http://docs.amazonwebservices.com/amazonglacier/latest/dev/amazon-glacier-signing-requests.html#example-signature-calculation
    sts = """AWS4-HMAC-SHA256
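The question preview is truncated above. For context, here is a minimal sketch of the SigV4 signing chain that documentation page describes; the date, region, and secret key below are the placeholder example values from the AWS docs, and the string to sign is elided. A common mistake with this code is hex-encoding the intermediate HMACs: every step except the final signature must use the raw digest bytes.

    import hashlib
    import hmac

    def sign(key: bytes, msg: str) -> bytes:
        # Raw HMAC-SHA256 bytes; intermediate keys must NOT be hex-encoded.
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    def derive_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
        # SigV4 key derivation: chained HMACs over date, region, service, and a terminator.
        k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
        k_region = sign(k_date, region)
        k_service = sign(k_region, service)
        return sign(k_service, "aws4_request")

    # Placeholder values from the AWS documentation examples.
    secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    signing_key = derive_signing_key(secret_key, "20120525", "us-east-1", "glacier")

    sts = "..."  # the full AWS4-HMAC-SHA256 string to sign, elided here
    signature = hmac.new(signing_key, sts.encode("utf-8"), hashlib.sha256).hexdigest()
    print(signature)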

Amazon S3 lifecycle retroactive application

只谈情不闲聊 submitted on 2021-02-06 09:35:06
Question: Fairly straightforward question: do the Amazon S3 lifecycle rules that I set get applied to data retroactively? If so, what sort of delay might I see before older data begins to be archived or deleted? By way of example, let's say I have a bucket with 3 years of backed-up data. If I create a new lifecycle rule where that data will be archived after 31 days and deleted after 365 days, will that new rule be applied to the existing data? How soon will it begin to be enforced?

Answer 1: Yes, it's retroactive
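The answer preview is cut off above. Lifecycle rules are evaluated against each object's age, not the rule's creation date, so existing objects qualify immediately; S3 runs lifecycle actions roughly once a day, so the first transitions typically begin within about a day. A minimal boto3 sketch of the rule the question describes, with a placeholder bucket name:

    import boto3

    s3 = boto3.client("s3")

    # Archive every object 31 days after creation, delete it after 365.
    # Existing objects already past these thresholds are picked up on the
    # next daily lifecycle evaluation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-backup-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-then-delete",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # whole bucket
                    "Transitions": [{"Days": 31, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )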

Trying to restore Glacier Deep Archive to a different S3 bucket

﹥>﹥吖頭↗ submitted on 2021-01-29 17:40:14
Question: I am trying to restore a Glacier Deep Archive object to a different S3 bucket, but when I run the command below I get this error: fatal error: An error occurred (404) when calling the HeadObject operation: Key "cf-ant-prod" does not exist

    aws s3 cp s3://xxxxxxx/cf-ant-prod s3://xxxxxxx/atest --force-glacier-transfer --storage-class STANDARD --profile xxx

Source: https://stackoverflow.com/questions/63830307/trying-to-restore-glacier-deep-archive-to-different-s3-bucket
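No answer is included in this preview. Two observations, offered as a sketch rather than a confirmed diagnosis: the 404 on HeadObject suggests "cf-ant-prod" is a prefix rather than an object key (aws s3 cp would then need --recursive), and Deep Archive objects cannot be copied until a restore has been completed for them. The usual two-step flow in boto3, with placeholder bucket and key names:

    import boto3

    s3 = boto3.client("s3")

    bucket, key = "source-bucket", "cf-ant-prod/backup.tar"  # placeholders

    # Step 1: ask S3 to stage a temporary restored copy. For Deep Archive,
    # the Standard tier takes up to ~12 hours (Bulk up to ~48).
    s3.restore_object(
        Bucket=bucket,
        Key=key,
        RestoreRequest={
            "Days": 7,  # how long the restored copy remains available
            "GlacierJobParameters": {"Tier": "Standard"},
        },
    )

    # Step 2: once head_object reports ongoing-request="false", copy the
    # restored bytes to the target bucket as a regular STANDARD object.
    head = s3.head_object(Bucket=bucket, Key=key)
    if 'ongoing-request="false"' in head.get("Restore", ""):
        s3.copy_object(
            Bucket="target-bucket",  # placeholder
            Key="atest/backup.tar",
            CopySource={"Bucket": bucket, "Key": key},
            StorageClass="STANDARD",
        )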

Accessing stream in job.get_output('body')

大兔子大兔子 submitted on 2021-01-28 07:55:25
Question: Sample code:

    import boto3

    glacier = boto3.resource('glacier')
    job = glacier.Job(accountID, vaultlist[0], id=joblist[0])
    r = job.get_output()
    print(r['body'])

That print only yields botocore.response.StreamingBody at 0xsnip. r['body'] should be the inventory in CSV format, but I can't figure out how to get to it. I spent a bit of time trying to use io to read in the stream, and either that is not the right way or I did it wrong. Can you point me in the right direction? Thanks!

Answer 1: Here's a
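The answer is truncated above. In case it helps, the usual resolution: a StreamingBody is a file-like object, so the CSV text comes from reading and decoding it rather than printing the object itself. A short sketch continuing from the question's snippet:

    import csv

    # r['body'] is a botocore StreamingBody (from r = job.get_output() above);
    # read() drains the stream and decode() turns the bytes into CSV text.
    csv_text = r["body"].read().decode("utf-8")

    # Parse the inventory rows; csv.reader accepts any iterable of lines.
    for row in csv.reader(csv_text.splitlines()):
        print(row)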

How to upload a file to Amazon Glacier using Node.js?

隐身守侯 submitted on 2020-05-27 06:45:34
Question: I found this example in the Amazon AWS docs:

    var glacier = new AWS.Glacier(),
        vaultName = 'YOUR_VAULT_NAME',
        buffer = new Buffer(2.5 * 1024 * 1024); // 2.5MB buffer

    var params = {vaultName: vaultName, body: buffer};
    glacier.uploadArchive(params, function(err, data) {
      if (err) console.log("Error uploading archive!", err);
      else console.log("Archive ID", data.archiveId);
    });

But I don't understand where my file goes, or how to send it to the Glacier servers?

Answer 1: The file is stored in the
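The answer is cut off above. The docs example uploads an in-memory buffer, not a file: nothing from disk is sent unless you read the file yourself and pass its bytes as the body. (As an aside, new Buffer() is deprecated in current Node.js in favor of Buffer.alloc.) For comparison, a minimal sketch of the same upload in Python with boto3, with placeholder vault and file names:

    import boto3

    glacier = boto3.client("glacier")

    # Read the file's bytes and send them as the archive body; Glacier stores
    # the opaque blob and returns an archive ID needed to retrieve it later.
    with open("backup.tar.gz", "rb") as f:  # placeholder path
        response = glacier.upload_archive(
            vaultName="YOUR_VAULT_NAME",  # placeholder
            archiveDescription="backup.tar.gz",
            body=f,
        )

    print("Archive ID:", response["archiveId"])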

Looping through a list of archive IDs with the CLI: aws describe-job returns null output

雨燕双飞 submitted on 2020-04-17 22:09:26
Question: I have a loop that gets all the job IDs and checks the status of each job via the CLI:

    jq -r '.JobList |= unique_by(.ArchiveId) | .JobList[] | "\(.JobId)"' jobs.json \
      | while IFS= read -r job_id; do
          job_status=$(aws glacier describe-job --account-id 2222222--vault-name my-vault --job-id $job_id --output json)
          echo $job_status"," >> job-status.json
        done

My issue here is that every time it gets past the job_status command I get the following response: usage: aws [options] <command> <subcommand> [
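The preview stops mid-usage-message. One detail in the command as pasted that would by itself trigger the CLI's usage text: there is no space between the account ID and --vault-name, so the two are parsed as a single malformed argument (whether that is the real script or a copy artifact is unclear). The same loop is also easy to express in boto3; a sketch with a placeholder vault name:

    import json

    import boto3

    glacier = boto3.client("glacier")

    # Load the job list and dedupe by ArchiveId, mirroring the jq expression.
    with open("jobs.json") as f:
        jobs = {j["ArchiveId"]: j for j in json.load(f)["JobList"]}.values()

    statuses = []
    for job in jobs:
        statuses.append(
            glacier.describe_job(
                accountId="-",  # "-" means the account behind the credentials
                vaultName="my-vault",  # placeholder
                jobId=job["JobId"],
            )
        )

    with open("job-status.json", "w") as f:
        json.dump(statuses, f, indent=2, default=str)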

Is it possible to automatically move objects from an S3 bucket to another one after some time?

生来就可爱ヽ(ⅴ<●) submitted on 2020-01-25 05:09:32
Question: I have an S3 bucket which accumulates objects quite rapidly, and I'd like to automatically move objects older than a week to another bucket. Is it possible to do this with a policy, and if so, what would the policy look like? If a move to another S3 bucket isn't possible, is there some other automatic mechanism for archiving them, potentially to Glacier?

Answer 1: Yes, you can archive automatically from S3 to Glacier. You can establish it by creating a Lifecycle Rule in the Amazon Console. http://aws
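The answer's link is truncated. Lifecycle rules cannot copy objects into a different bucket, but they can change an object's storage class in place, which covers the archiving half of the question. A sketch of a rule that sends week-old objects to Glacier, with a placeholder bucket name (the console Lifecycle Rule the answer mentions configures the same thing):

    import boto3

    s3 = boto3.client("s3")

    # Transition objects to Glacier seven days after creation. Lifecycle
    # rules act in place; an actual move to another bucket would need a
    # separate copy step (e.g. replication or a scheduled job).
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-fast-filling-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-glacier-after-a-week",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # whole bucket
                    "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )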