boto

ssh key of newly created ec2 instance using boto

Posted by 邮差的信 on 2019-12-12 01:45:45
Question: I am using boto to connect to EC2 and launch an instance. After creating the instance, I need to ssh to it, and I need the server's public SSH host key so I can add it to my known_hosts file. How do I get that key using boto? I do not want to bypass host key verification. I have used the boto command shell, but looking at the source, it appears boto uses paramiko and bypasses SSH host key checking. Can anyone help?

Answer 1:

    # Check to see if specified keypair already exists.
    # If we get an InvalidKeyPair
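The truncated answer deals with the EC2 keypair; for the host key itself, one approach that avoids bypassing verification is to read it from the instance's system console, where the generated host keys are printed on first boot. A minimal sketch, assuming boto 2 and a hypothetical region and instance id (console output can take a few minutes to appear):

    import boto.ec2

    # Read the SSH host keys from the EC2 system console output,
    # so they can be verified without ever connecting blind.
    conn = boto.ec2.connect_to_region('us-west-2')
    console = conn.get_console_output('i-12345678')  # hypothetical id
    in_keys = False
    for line in (console.output or '').splitlines():
        if 'BEGIN SSH HOST KEY KEYS' in line:
            in_keys = True
        elif 'END SSH HOST KEY KEYS' in line:
            in_keys = False
        elif in_keys:
            print(line)  # e.g. "ssh-rsa AAAA..." -> append to known_hosts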

Upload 0 byte file to Amazon S3

Posted by 泄露秘密 on 2019-12-11 20:08:37
Question: Is it possible to upload a 0-byte file to Amazon S3? My standard response would be to disallow it, but I have a site that needs to let users upload blank .txt files, which happen to be 0 bytes. Amazon returns a MalformedXML response:

    <Error>
      <Code>MalformedXML</Code>
      <Message>The XML you provided was not well-formed or did not validate against our published schema</Message>
      <RequestId>234...</RequestId>
      <HostId>2309fsdijflsd...w32r09s</HostId>
    </Error>

I'm using boto==2.3.0 with
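Not from the truncated answer, but one known trigger for MalformedXML is completing a multipart upload with zero parts, which empty files can hit; a plain single PUT accepts 0-byte bodies. A minimal sketch with a hypothetical bucket name:

    import boto
    from boto.s3.key import Key

    # Route empty files through a single PUT rather than a multipart upload.
    conn = boto.connect_s3()
    bucket = conn.get_bucket('my-bucket')  # hypothetical bucket
    k = Key(bucket)
    k.key = 'blank.txt'
    k.set_contents_from_string('')  # a plain PUT accepts 0-byte objects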

Shared python libraries between multiple APIs on AWS

Posted by £可爱£侵袭症+ on 2019-12-11 16:59:31
Question: I have several different Python APIs (i.e. Python scripts) that run on AWS Lambda. The standard approach is to generate a zip file that includes all the external libraries the Lambda function needs and then upload it to AWS. Now, some functions are shared between the different APIs (e.g. custom utility functions such as parsing text files or dates). Currently I simply duplicate the file utils.py in every zip file. However, this approach is quite inefficient (I
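One way to stop duplicating utils.py is to publish the shared code once as a Lambda layer and attach it to each function. A sketch using boto3, under assumptions (layer, function, and zip names are hypothetical); the zip must place utils.py under a top-level python/ directory so it lands on the runtime's import path:

    import boto3

    lam = boto3.client('lambda')

    # Publish the shared utils once as a layer.
    with open('utils_layer.zip', 'rb') as f:  # zip containing python/utils.py
        layer = lam.publish_layer_version(
            LayerName='shared-utils',
            Content={'ZipFile': f.read()},
            CompatibleRuntimes=['python3.8'],
        )

    # Attach the layer to a function instead of bundling utils.py into it.
    lam.update_function_configuration(
        FunctionName='my-api-function',
        Layers=[layer['LayerVersionArn']],
    )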

AWS boto Get Snapshots in Time Period

Posted by 被刻印的时光 ゝ on 2019-12-11 13:24:24
Question: I'm using AWS and pulling snapshots with boto ("The Python interface to Amazon Web Services"). I'm currently pulling all snapshots with conn.get_all_snapshots(), but I only want to retrieve the necessary data. I'm using a calendar to view the snapshots, so it would be very helpful if I could pull only the snapshots within the month currently being viewed. Is there a restriction (maybe a filter) I can put on conn.get_all_snapshots() to retrieve only that month's snapshots? Here are the boto docs
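A sketch of a client-side cut, assuming boto 2 and a hypothetical region: each snapshot carries a start_time string (ISO 8601 in boto 2), so one pass over the results selects a month.

    from datetime import datetime
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')  # hypothetical region
    snaps = conn.get_all_snapshots(owner='self')

    month_start = datetime(2019, 12, 1)
    month_end = datetime(2020, 1, 1)

    def in_month(snap):
        # start_time looks like '2019-12-11T13:24:24.000Z'
        started = datetime.strptime(snap.start_time[:19], '%Y-%m-%dT%H:%M:%S')
        return month_start <= started < month_end

    december = [s for s in snaps if in_month(s)]

If a server-side cut is preferred, the EC2 start-time filter accepts wildcards, e.g. filters={'start-time': '2019-12-*'}, though that only matches a literal month prefix rather than a true date range.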

python dynamodb get 1000 entries

Posted by 旧巷老猫 on 2019-12-11 13:10:54
Question: I am using Amazon DynamoDB and accessing it via the Python boto query interface. I have a very simple requirement: I want to get 1000 entries, but I don't know the primary keys beforehand. I just want any 1000 entries. How can I do this? I know how to use query_2, but that requires knowing the primary keys beforehand. Afterwards I may want to get a different 1000, and so on; you can think of it as sampling without replacement. How can I do this? Any help is much
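A sketch of one way to do this with boto's DynamoDB v2 layer, assuming a hypothetical table name: Scan (unlike Query) needs no primary keys, and the ResultSet it returns follows LastEvaluatedKey for you, so pulling the next batch is just continuing the same iterator.

    from itertools import islice
    from boto.dynamodb2.table import Table

    table = Table('my-table')  # hypothetical table name
    results = table.scan()     # lazy; pages are fetched as you iterate

    first_thousand = list(islice(results, 1000))
    next_thousand = list(islice(results, 1000))  # continues where we left off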

Unable to paginate EMR cluster using boto

Posted by Deadly on 2019-12-11 12:34:57
Question: I have about 55 EMR clusters (all of them terminated) and have been trying to retrieve all 55 using the list_clusters method in boto. I've searched for examples of paginating the result set from boto but couldn't find any. Given this statement:

    emr_object.list_clusters(cluster_states=["TERMINATED"], marker="what_should_i_use_here").clusters

I kept getting an InvalidRequestException error:

    boto.exception.EmrResponseError: EmrResponseError: 400
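A sketch of the usual pagination pattern, assuming boto 2 and a hypothetical region: the marker must be the opaque value returned by the previous page, not an invented string, which is a plausible cause of the InvalidRequestException.

    import boto.emr

    conn = boto.emr.connect_to_region('us-east-1')  # hypothetical region

    clusters = []
    marker = None
    while True:
        # First call passes marker=None; later calls pass the returned marker.
        page = conn.list_clusters(cluster_states=['TERMINATED'], marker=marker)
        clusters.extend(page.clusters)
        marker = getattr(page, 'marker', None)
        if not marker:  # no marker means this was the last page
            break

    print(len(clusters))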

Amazon S3 upload fails using boto + Python

Posted by 自古美人都是妖i on 2019-12-11 12:13:17
Question: Hi, I am unable to upload a file to S3 using boto. It fails with the following error message. Can someone help me? I am new to Python and boto.

    from boto.s3 import connect_to_region
    from boto.s3.connection import Location
    from boto.s3.key import Key
    import boto
    import gzip
    import os

    AWS_KEY = ''
    AWS_SECRET_KEY = ''
    BUCKET_NAME = 'mybucketname'

    conn = connect_to_region(Location.USWest2,
                             aws_access_key_id=AWS_KEY,
                             aws_secret_access_key=AWS_SECRET_KEY,
                             is_secure=False, debug=2)
    bucket = conn
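The actual error message is truncated above, so here is a minimal known-good upload for comparison, with placeholder credentials. One thing worth noting: connect_to_region's first argument is a region name string; the Location constants are intended for create_bucket's location parameter, although Location.USWest2 happens to hold the string 'us-west-2'.

    from boto.s3 import connect_to_region
    from boto.s3.key import Key

    AWS_KEY = '...'         # hypothetical placeholders
    AWS_SECRET_KEY = '...'

    conn = connect_to_region('us-west-2',
                             aws_access_key_id=AWS_KEY,
                             aws_secret_access_key=AWS_SECRET_KEY)
    bucket = conn.get_bucket('mybucketname')
    k = Key(bucket)
    k.key = 'test.gz'
    k.set_contents_from_filename('test.gz')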

Django media uploads to Amazon S3

Posted by 自作多情 on 2019-12-11 11:58:05
Question: I'm trying to upload all the Django media files (uploaded from the admin panel) to Amazon S3, so the settings file looks something like this:

    INSTALLED_APPS = (
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.sites',
        'django.contrib.messages',
        'django.contrib.staticfiles',
        'django.contrib.admin',
        'tastypie',
        'core',
        'advertisment',
        'storages',
    )

    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    AWS_ACCESS_KEY_ID = 'xxx'
    AWS_SECRET
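For reference, a sketch of the settings that typically complete this backend (the bucket name is hypothetical): django-storages' S3BotoStorage also needs AWS_STORAGE_BUCKET_NAME, and MEDIA_URL usually points at the bucket so uploaded files are served from S3.

    # settings.py (continued) -- hedged sketch of the remaining pieces
    AWS_SECRET_ACCESS_KEY = 'xxx'
    AWS_STORAGE_BUCKET_NAME = 'my-media-bucket'  # hypothetical bucket
    MEDIA_URL = 'https://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME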

boto set_contents_from_filename memory leak

Posted by 北城余情 on 2019-12-11 11:03:12
Question: I'm seeing a memory leak when using boto to upload files. Am I doing something wrong here? Memory usage seems to increase less consistently if I remove the sleep or if I don't alternate between two different buckets.

    import time, resource, os
    import boto

    conn = boto.connect_s3()
    for i in range(20):
        print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        path = 'test.png'
        bucket = conn.lookup('jca-screenshots-' + ('thumbs' if i % 2 == 0 else 'normal'))
        k = boto.s3.key.Key(bucket)
        k.key = os
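One thing to rule out before chasing a leak: ru_maxrss reports the peak resident set size, which by definition never decreases, so it cannot show memory being released between iterations. A small sketch that samples the current RSS instead, using psutil (an extra dependency) rather than the resource module:

    import os
    import psutil

    proc = psutil.Process(os.getpid())
    print(proc.memory_info().rss)  # current RSS in bytes; can go down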

How to compile python code that uses boto to access S3?

Posted by 人盡茶涼 on 2019-12-11 10:05:43
Question: I'm trying to compile a simple Python program, which uploads files to an S3 bucket using the boto package, into a single redistributable .exe file. I'm open to any compilation method. So far I've tried both bbfreeze and py2exe, and both yield the same results. The code in question that causes trouble looks like this:

    import boto
    # ...snip...
    fname_base = os.path.basename(fname)
    s3 = boto.connect_s3(aws_access_key_id=_aws_key,
                         aws_secret_access_key=_aws_secret_key,
                         is_secure=False)
    bucket = s3
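Not a confirmed diagnosis, but a common freezing pitfall with boto: connect_s3 imports boto.s3.connection lazily at call time, and submodules reached only that way can be missed by a freezer's analysis. A sketch of a py2exe setup.py that forces the whole package in (the entry script name is hypothetical):

    from distutils.core import setup
    import py2exe  # registers the py2exe command with distutils

    setup(
        console=['s3_upload.py'],  # hypothetical entry script
        options={'py2exe': {'packages': ['boto']}},  # bundle all submodules
    )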