boto3

boto3 wait_until_running doesn't work as desired

巧了我就是萌 submitted on 2019-12-25 09:36:38
Question: I'm trying to write a script using boto3 to start an instance and wait until it is started. According to the documentation of wait_until_running, it should wait until the instance is fully started (I'm assuming the status checks should pass), but unfortunately it only works for wait_until_stopped; in the case of wait_until_running it just starts the instance and doesn't wait until it is completely started. I'm not sure if I'm doing something wrong here or whether this is a bug in boto3. Here is the code: import boto3 ec2 =
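The resource-level waiter only tracks the instance *state* (running), not the reachability/status checks. A minimal sketch of the usual workaround, assuming a hypothetical instance ID, is to fall back to the client-level instance_status_ok waiter after wait_until_running:

```python
import boto3

# Hypothetical instance ID for illustration
INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.resource("ec2")
instance = ec2.Instance(INSTANCE_ID)
instance.start()

# wait_until_running only waits for the instance state to become
# 'running'; it does not wait for the system/instance status checks.
instance.wait_until_running()

# To block until both status checks pass, use the client-level waiter.
client = boto3.client("ec2")
waiter = client.get_waiter("instance_status_ok")
waiter.wait(InstanceIds=[INSTANCE_ID])
print("Instance is running and status checks are OK")
```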

boto3 signature doesn't match with S3

不问归期 submitted on 2019-12-25 08:00:08
Question: I'm trying to make an upload from Heroku to S3 using boto3, but I keep getting the error <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>. I've tried using a pre-signed POST and a pre-signed URL, but the error persists. The credentials that I'm providing to Heroku to make the request are my root AWSAccessKeyID and secret key, so I shouldn't have issues with permissions.
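A frequent (though not the only) cause of SignatureDoesNotMatch is a mismatch between the bucket's region and the signature version the client uses. A minimal sketch that pins both when generating a pre-signed URL; the bucket name, key, and region are assumptions:

```python
import boto3
from botocore.client import Config

# Bucket name, key, and region are assumptions for illustration.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",                  # must match the bucket's region
    config=Config(signature_version="s3v4"),  # force Signature Version 4
)

url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/file.txt"},
    ExpiresIn=3600,
)
print(url)
```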

Get volume information associated with Instance

馋奶兔 submitted on 2019-12-25 07:39:40
Question: I'm trying to retrieve all the volumes associated with an instance. if volume.attachment_state() == 'attached': volumesinstance = ec2_connection.get_all_instances() ids = [z for k in volumesinstance for z in k.instances] for s in ids: try: tags = s.tags instance_name = tags["Name"] print (instance_name) except Exception as e: print e However, it's not working as intended. Answer 1: You can add filters to the get_all_instances method like this: filter = {'block-device-mapping.volume-id': volume.id}
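The snippet above uses the legacy boto interface; in boto3 the same lookup can be done by filtering describe_instances on the volume ID. A minimal sketch, assuming a hypothetical volume ID:

```python
import boto3

ec2 = boto3.client("ec2")
volume_id = "vol-0123456789abcdef0"  # hypothetical volume ID

# Find the instance(s) that have this volume attached.
response = ec2.describe_instances(
    Filters=[{"Name": "block-device-mapping.volume-id", "Values": [volume_id]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        name = next(
            (t["Value"] for t in instance.get("Tags", []) if t["Key"] == "Name"),
            None,
        )
        print(instance["InstanceId"], name)
```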

How do I read a gzipped parquet file from S3 into Python using Boto3?

微笑、不失礼 submitted on 2019-12-25 01:43:50
Question: I have a file called data.parquet.gzip in my S3 bucket. I can't figure out what the problem is in reading it. Normally I've worked with StringIO, but I don't know how to fix it here. I want to import it from S3 into my Python Jupyter notebook session using pandas and boto3. Answer 1: The solution is actually quite straightforward. import boto3 # For read+push to S3 bucket import pandas as pd # Reading parquets from io import BytesIO # Converting bytes to bytes input file import pyarrow # Fast reading of
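The key point is that parquet is a binary format, so it needs BytesIO rather than StringIO. A minimal sketch completing the idea, with a hypothetical bucket and key, assuming the gzip compression is internal to the parquet file (which pandas/pyarrow handle transparently):

```python
import boto3
import pandas as pd
from io import BytesIO  # parquet is binary, so BytesIO rather than StringIO

# Bucket and key are assumptions for illustration.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="data.parquet.gzip")

# Assumes the gzip compression is internal to the parquet file; pandas
# (via pyarrow) then reads it directly from the in-memory buffer.
df = pd.read_parquet(BytesIO(obj["Body"].read()))
print(df.head())
```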

boto3 pricing returns multiple values for same type of instances

不打扰是莪最后的温柔 submitted on 2019-12-25 00:10:02
Question: I am trying the following code to get the prices of instances in my region: import boto3 import json my_session = boto3.session.Session() region = boto3.session.Session().region_name print "region : ", region pricing_client = boto3.client("pricing") pricingValues = pricing_client.get_products(ServiceCode='AmazonEC2', Filters=[{'Type': 'TERM_MATCH', 'Field': 'instanceType', 'Value': 'm4.large'}, {'Type': 'TERM_MATCH', 'Field': 'location', 'Value': 'Asia Pacific (Mumbai)'}, {'Type': 'TERM_MATCH', 'Field
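The Pricing API returns one product entry per combination of attributes (operating system, tenancy, capacity status, pre-installed software, and so on), so a single instance type matches several products. A sketch of narrowing the result with extra TERM_MATCH filters; the added filter values are assumptions for a plain shared-tenancy Linux price:

```python
import boto3
import json

# The Pricing API is only served from a few regions, e.g. us-east-1.
pricing = boto3.client("pricing", region_name="us-east-1")

# The extra attribute values below are assumptions for a shared-tenancy
# Linux price; adjust them to the exact product you want.
response = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "m4.large"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "Asia Pacific (Mumbai)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
)

# PriceList entries are JSON strings, one per matching product.
for entry in response["PriceList"]:
    product = json.loads(entry)
    print(product["product"]["attributes"]["instanceType"])
```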

Best way to upload data with boto3 to DynamoDB?

假如想象 submitted on 2019-12-24 23:59:49
Question: I'm using Python boto3 to upload data to AWS. I have a dedicated connection to AWS of 350 Mbps. I have a large JSON file, and I would like to know whether it is better to upload this information directly to DynamoDB, or to upload it to S3 first and then use Data Pipeline to load it into DynamoDB. My data is already clean and doesn't need to be processed. I just need to get this information into DynamoDB in the most efficient and reliable way. My script will be run on
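For uploading directly from the script, the usual approach is the table's batch_writer, which groups put requests into batches and retries unprocessed items. A minimal sketch, assuming the JSON file is a list of dicts matching a hypothetical table's schema:

```python
import boto3
import json

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("MyTable")  # hypothetical table name

# Assumes data.json contains a list of dicts matching the table's schema.
with open("data.json") as f:
    items = json.load(f)

# batch_writer groups items into 25-item BatchWriteItem calls and
# automatically resends unprocessed items.
with table.batch_writer() as batch:
    for item in items:
        batch.put_item(Item=item)
```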

Python Boto3 S3 Bucket Encryption result in 0 bytes file

北城以北 submitted on 2019-12-24 20:52:12
Question: I need some help here. I have a Python script using boto3 that does S3 bucket encryption. It was working fine before, and just recently I noticed that when I use the script, it causes the object to become 0 bytes. It works just fine if I encrypt it manually via the console. Is anyone facing a similar issue, and would you mind sharing any workaround? At least to recover the files. I am clueless here. I did some quick troubleshooting and found out that when the line below
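The failing line is not shown, so the cause here is an assumption: a common way to end up with 0-byte objects is re-writing the key with put_object and no Body, instead of copying the object onto itself with the new encryption setting. A sketch of the copy-in-place approach, with hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"    # hypothetical bucket
key = "path/to/object"  # hypothetical key

# Copy the object onto itself while requesting SSE. Unlike a bare
# put_object call with no Body (which replaces the object with an empty,
# 0-byte one), this preserves the existing content.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    ServerSideEncryption="AES256",
)
```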

What does the batch._client.describe_endpoints function do?

孤街醉人 submitted on 2019-12-24 19:40:28
Question: Offshoot from this question. Very simple question. table = boto3.resource('dynamodb').Table('TableName') with table.batch_writer() as batch: batch.put_item(Items=[list_of_items]) batch._client.describe_endpoints() Returns a response that looks like this. {'RequestId': 'U4BS4PNCBKA9JO3M7TIMGDFSMJVV4KQNSO5AEMVJF66Q9ASUAAJH', 'HTTPStatusCode': 200, 'HTTPHeaders': {'server': 'Server', 'date': 'Wed, 27 Mar 2019 05:05:03 GMT', 'content-type': 'application/x-amz-json-1.0', 'content-length': '90',
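describe_endpoints is not specific to the batch writer: it maps to DynamoDB's public DescribeEndpoints API, which the SDK can use for endpoint discovery, and it simply returns the regional endpoint(s) the client should talk to. A minimal sketch calling it through a plain client rather than the private _client attribute:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# DescribeEndpoints returns the endpoint(s) the client should use and
# how long that answer may be cached.
response = dynamodb.describe_endpoints()
for endpoint in response["Endpoints"]:
    print(endpoint["Address"], endpoint["CachePeriodInMinutes"])
```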

query cloudwatch logs for distinct values using boto3 in python

痴心易碎 submitted on 2019-12-24 18:49:28
Question: I have a Lambda function that writes metrics to CloudWatch. While it writes metrics, it generates some logs in a log group. INFO:: username: simran+test@abc.com ClinicID: 7667 nodename: MacBook-Pro-2.local INFO:: username: simran+test2@abc.com ClinicID: 7669 nodename: MacBook-Pro-3.local INFO:: username: simran+test@abc.com ClinicID: 7668 nodename: MacBook-Pro-4.local INFO:: username: simran+test3@abc.com ClinicID: 7667 nodename: MacBook-Pro-5.local INFO:: username: simran+test3@abc.com
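One way to get distinct values out of a log group from boto3 is CloudWatch Logs Insights via start_query/get_query_results. A sketch assuming a hypothetical log group name; the parse pattern is an assumption based on the log lines shown above:

```python
import time
import boto3

logs = boto3.client("logs")

# Log group name and the parse pattern are assumptions for illustration.
query = (
    "fields @message "
    '| parse @message "ClinicID: * nodename" as clinic_id '
    "| stats count(*) by clinic_id"
)

start = logs.start_query(
    logGroupName="/aws/lambda/my-function",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=query,
)

# Insights queries run asynchronously, so poll until the query completes.
result = logs.get_query_results(queryId=start["queryId"])
while result["status"] in ("Scheduled", "Running"):
    time.sleep(1)
    result = logs.get_query_results(queryId=start["queryId"])

for row in result["results"]:
    print({field["field"]: field["value"] for field in row})
```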

Copying AWS Snapshots using boto3

删除回忆录丶 submitted on 2019-12-24 18:27:03
Question: I have a piece of code for the source and destination regions. I managed to get a response with all the snapshot data, but I can't manage to filter the response down to just the "SnapshotId" and copy it. import boto3 REGIONS = ['eu-central-1', 'eu-west-3'] SOURCEREG = boto3.client('ec2', region_name='eu-central-1') DISTREG = boto3.client('ec2', region_name='eu-west-3') response = SOURCEREG.describe_snapshots() print(response) In this case I receive a JSON response looking like {'OwnerId': 'xxxxxxx',
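describe_snapshots returns a dict with a 'Snapshots' list, so each snapshot ID can be pulled from that list and passed to copy_snapshot on the destination client. A minimal sketch building on the code above; OwnerIds=['self'] is an assumption to limit the listing to snapshots owned by the account:

```python
import boto3

SOURCE_REGION = "eu-central-1"
DEST_REGION = "eu-west-3"

source = boto3.client("ec2", region_name=SOURCE_REGION)
dest = boto3.client("ec2", region_name=DEST_REGION)

# OwnerIds=['self'] (an assumption) restricts the listing to this
# account's snapshots rather than every public snapshot.
response = source.describe_snapshots(OwnerIds=["self"])

for snapshot in response["Snapshots"]:
    snapshot_id = snapshot["SnapshotId"]
    dest.copy_snapshot(
        SourceRegion=SOURCE_REGION,
        SourceSnapshotId=snapshot_id,
        Description="Copy of {} from {}".format(snapshot_id, SOURCE_REGION),
    )
```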