boto3

How to get authenticated identity response from AWS Cognito using boto3

China☆狼群 submitted on 2020-06-27 08:58:19
问题 Question: I would like to use boto3 to get temporary credentials for accessing AWS services. The use case is this: a user in my Cognito User Pool logs in to my server, and I want the server code to provide that user with temporary credentials to access other AWS services. I have a Cognito User Pool where my users are stored. I have a Cognito Identity Pool that does NOT allow unauthorized access, only access by users from the Cognito User Pool. So here is the code I am starting with: import boto3 client =
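A minimal sketch of one way to do this with boto3's cognito-identity client, assuming the user pool sign-in already produced an ID token; the region, pool IDs, and token value below are placeholders:

import boto3

REGION = "us-east-1"                 # placeholder region
USER_POOL_ID = "us-east-1_example"   # placeholder user pool id
IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"  # placeholder
id_token = "<ID token from the user pool sign-in>"

identity = boto3.client("cognito-identity", region_name=REGION)
provider = f"cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}"

# Exchange the user pool ID token for an identity ID in the identity pool.
identity_id = identity.get_id(
    IdentityPoolId=IDENTITY_POOL_ID,
    Logins={provider: id_token},
)["IdentityId"]

# Fetch temporary credentials scoped to the identity pool's authenticated role.
creds = identity.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins={provider: id_token},
)["Credentials"]

# The temporary credentials can now be handed to any other service client.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)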

S3 URLs - get bucket name and path

本小妞迷上赌 submitted on 2020-06-24 05:30:52
问题 Question: I have a variable that holds the AWS S3 URL s3://bucket_name/folder1/folder2/file1.json. I want to get the bucket_name in one variable and the rest, i.e. /folder1/folder2/file1.json, in another variable. I tried regular expressions and could get the bucket_name as below, though I am not sure if there is a better way. m = re.search('(?<=s3:\/\/)[^\/]+', 's3://bucket_name/folder1/folder2/file1.json') print(m.group(0)) How do I get the rest, i.e. folder1/folder2/file1.json? I have checked if there is a boto3
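One regex-free alternative is to let urllib.parse split the URL; a small sketch using the URL from the question:

from urllib.parse import urlparse

url = "s3://bucket_name/folder1/folder2/file1.json"
parsed = urlparse(url)
bucket = parsed.netloc           # 'bucket_name'
key = parsed.path.lstrip("/")    # 'folder1/folder2/file1.json'
print(bucket, key)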

Reading contents of a gzip file from AWS S3 in Python

无人久伴 submitted on 2020-06-24 05:04:09
问题 Question: I am trying to read some logs from a Hadoop process that I run in AWS. The logs are stored in an S3 folder and have the following path: bucketname = name, key = y/z/stderr.gz. Here y is the cluster id and z is a folder name; both of these act as folders (objects) in AWS, so the full path is like x/y/z/stderr.gz. Now I want to unzip this .gz file and read its contents. I don't want to download this file to my system; I want to save the contents in a Python variable. This is what I have
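A sketch of one way to read the object entirely in memory, wrapping the streaming body returned by get_object in gzip.GzipFile; the bucket and key names are taken from the question and may need adjusting:

import gzip
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="bucketname", Key="y/z/stderr.gz")

# The response body is file-like, so gzip can decompress it without
# writing anything to disk.
with gzip.GzipFile(fileobj=obj["Body"]) as gz:
    contents = gz.read().decode("utf-8")

print(contents[:200])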

Is there now a way to get “AWS region names” in boto3?

a 夏天 submitted on 2020-06-23 06:27:48
问题 Question: Is there now a way in boto3 to convert AWS region codes to AWS region names, e.g. to convert ('us-west-1', 'us-east-1', 'us-west-2') to ('N. California', 'N. Virginia', 'Oregon')? I can get a list of AWS region codes with the following snippet: from boto3.session import Session s = Session() regions = s.get_available_regions('rds') print("regions:", regions) $ python3 regions.py regions: ['ap-northeast-1', 'ap-northeast-2', 'ap-south-1', 'ap-southeast-1', 'ap-southeast-2', 'ca-central-1', 'eu
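One approach (an assumption, not necessarily the only one) is to read the public SSM parameters that AWS publishes for each region; note these return long names such as 'US West (N. California)' rather than the short forms in the question:

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

def region_long_name(region_code):
    # Public parameter published by AWS for every region.
    name = f"/aws/service/global-infrastructure/regions/{region_code}/longName"
    return ssm.get_parameter(Name=name)["Parameter"]["Value"]

for code in ("us-west-1", "us-east-1", "us-west-2"):
    print(code, "->", region_long_name(code))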

DynamoDB: how to query by sort key only?

不打扰是莪最后的温柔 submitted on 2020-06-22 13:33:56
问题 Question: I have written some Python code and I want to query DynamoDB data by the sort key. I remember I could use the following code successfully: table.query(KeyConditionExpression=Key('event_status').eq(event_status)) My table structure: primary key: event_id, sort key: event_status. 回答1 Answer 1: You have to create a global secondary index (GSI) for the sort key in order to query on it alone. 回答2 Answer 2: The scan API should be used if you would like to get data from DynamoDB without using the hash key attribute value. Example:
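A sketch illustrating both answers; the table name, index name, and status value are assumptions:

import boto3
from boto3.dynamodb.conditions import Key, Attr

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("events")

# Answer 1: query a GSI whose partition key is event_status.
by_index = table.query(
    IndexName="event_status-index",
    KeyConditionExpression=Key("event_status").eq("CONFIRMED"),
)

# Answer 2: scan with a filter (reads the whole table, so slower and costlier).
by_scan = table.scan(FilterExpression=Attr("event_status").eq("CONFIRMED"))

print(by_index["Items"], by_scan["Items"])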

SSH to EC2 instance using boto on private IP through bastion server

家住魔仙堡 submitted on 2020-06-15 10:10:15
问题 Question: I am trying to execute a bash script on an EC2 instance using boto. Boto provides a way to SSH to an EC2 instance on its public IP, but in my case the instances have only private IPs. The way SSH is done on these instances is through a host that can SSH to all of them on their private IPs (a bastion host). Following is the script to connect to an instance on a public IP: s3_client = boto3.client('s3') s3_client.download_file('mybucket', 'key/mykey.pem', '/tmp/mykey.pem') k = paramiko.RSAKey.from_private_key_file
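A sketch of the usual paramiko pattern for this: SSH to the bastion, open a direct-tcpip channel to the private IP, and pass that channel as the sock argument of a second connection. Host names, user names, the key path, and the script path below are placeholders:

import paramiko

BASTION_HOST = "bastion.example.com"   # placeholder
PRIVATE_IP = "10.0.1.25"               # placeholder
pkey = paramiko.RSAKey.from_private_key_file("/tmp/mykey.pem")

# 1. Connect to the bastion host (public IP).
bastion = paramiko.SSHClient()
bastion.set_missing_host_key_policy(paramiko.AutoAddPolicy())
bastion.connect(BASTION_HOST, username="ec2-user", pkey=pkey)

# 2. Tunnel a TCP channel from the bastion to the private instance's SSH port.
channel = bastion.get_transport().open_channel(
    "direct-tcpip", (PRIVATE_IP, 22), ("127.0.0.1", 0)
)

# 3. Connect to the private instance over that channel and run the script.
target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target.connect(PRIVATE_IP, username="ec2-user", pkey=pkey, sock=channel)
stdin, stdout, stderr = target.exec_command("bash /home/ec2-user/myscript.sh")
print(stdout.read().decode())

target.close()
bastion.close()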

I would like to export DynamoDB Table to S3 bucket in CSV format using Python (Boto3)

…衆ロ難τιáo~ submitted on 2020-06-13 08:20:13
问题 Question: This question was asked earlier in the following link: How to write dynamodb scan data's in CSV and upload to s3 bucket using python? I have amended the code as advised in the comments. The code looks as follows: import csv import boto3 import json dynamodb = boto3.resource('dynamodb') db = dynamodb.Table('employee_details') def lambda_handler(event, context): AWS_BUCKET_NAME = 'session5cloudfront' s3 = boto3.resource('s3') bucket = s3.Bucket(AWS_BUCKET_NAME) path = '/tmp/' +
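A sketch of how a handler like the truncated one above might be completed; the CSV field names are derived from whatever the scan returns, and the output file name and object key are assumptions:

import csv
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("employee_details")
s3 = boto3.resource("s3")

def lambda_handler(event, context):
    # Scan the whole table, following LastEvaluatedKey pagination.
    items = []
    resp = table.scan()
    items.extend(resp["Items"])
    while "LastEvaluatedKey" in resp:
        resp = table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])
        items.extend(resp["Items"])

    # Write the items to a CSV under /tmp (the only writable path in Lambda).
    path = "/tmp/employee_details.csv"
    fieldnames = sorted({k for item in items for k in item})
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(items)

    # Upload the CSV to the bucket named in the question.
    s3.Bucket("session5cloudfront").upload_file(path, "employee_details.csv")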

Ungzipping chunks of bytes from S3 using iter_chunks()

冷暖自知 submitted on 2020-06-13 05:01:35
问题 Question: I am encountering issues ungzipping chunks of bytes that I am reading from S3 using the iter_chunks() method from boto3. The strategy of ungzipping the file chunk by chunk originates from this issue. The code is as follows: dec = zlib.decompressobj(32 + zlib.MAX_WBITS) for chunk in app.s3_client.get_object(Bucket=bucket, Key=key)["Body"].iter_chunks(2 ** 19): data = dec.decompress(chunk) print(len(chunk), len(data)) # 524288 65505 # 524288 0 # 524288 0 # ... This code initially prints out
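One possible explanation for the zero-length output is that the object is a concatenation of several gzip members, and a single decompressobj stops after the first one. A sketch of a chunk-by-chunk loop that restarts the decompressor whenever a member ends (the bucket and key are placeholders):

import zlib
import boto3

s3 = boto3.client("s3")
body = s3.get_object(Bucket="my-bucket", Key="logs/stderr.gz")["Body"]

dec = zlib.decompressobj(32 + zlib.MAX_WBITS)
for chunk in body.iter_chunks(2 ** 19):
    data = dec.decompress(chunk)
    # When a gzip member ends mid-stream, eof is set and the remaining
    # bytes land in unused_data; feed them to a fresh decompressor.
    while dec.eof and dec.unused_data:
        leftover = dec.unused_data
        dec = zlib.decompressobj(32 + zlib.MAX_WBITS)
        data += dec.decompress(leftover)
    print(len(chunk), len(data))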
