Question
I'm seeing a memory leak when using boto to upload files. Am I doing something wrong here? Memory usage seems to increase less consistently if I remove the sleep or if I don't alternate between two different buckets.
import time, resource, os
import boto

conn = boto.connect_s3()
for i in range(20):
    print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    path = 'test.png'
    bucket = conn.lookup('jca-screenshots-' + ('thumbs' if i % 2 == 0 else 'normal'))
    k = boto.s3.key.Key(bucket)
    k.key = os.path.basename(path)
    k.set_contents_from_filename(path)
    time.sleep(5)
Sample output:
12406784
13123584
13242368
13344768
13398016
13422592
13484032
13524992
13553664
13590528
13656064
13664256
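One thing to keep in mind when reading these numbers: ru_maxrss is a high-water mark (peak resident set size), so it can only ever grow, and its unit is platform-dependent (kilobytes on Linux, bytes on OS X). A minimal sketch of a sampling helper that forces a garbage-collection pass before each reading, to separate a genuine leak from ordinary allocator growth (the name sample_rss is illustrative, not from the original code):

import gc, resource

def sample_rss():
    # Collect unreachable objects first so they don't inflate the reading.
    gc.collect()
    # ru_maxrss is a peak value and never decreases; it is reported in
    # kilobytes on Linux and in bytes on OS X.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss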
Answer 1:
Solved by switching libraries, to requests with S3Auth: https://github.com/tax/python-requests-aws
import time, resource, os
import requests
from awsauth import S3Auth

with open(os.path.expanduser("~/.boto")) as f:
    lines = f.read().splitlines()
ACCESS_KEY = lines[1].split(' = ')[1]
SECRET_KEY = lines[2].split(' = ')[1]

for i in range(20):
    print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    url = 'http://{}.s3.amazonaws.com/{}'.format(
        'jca-screenshots-' + ('thumbs' if i % 2 == 0 else 'normal'), 'test.png')
    with open('test.png', 'rb') as f:
        resp = requests.put(url, data=f, auth=S3Auth(ACCESS_KEY, SECRET_KEY))
    print 'resp:', resp
    time.sleep(5)
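Passing the open file object as data lets requests stream the body from disk instead of buffering the whole file in memory, which is presumably why the footprint stays flat here. One caveat with the snippet above: reading the keys by line position breaks if ~/.boto has comments or a different ordering. Since the file is INI-formatted, ConfigParser is a more robust way to pull them out; a sketch, assuming the standard [Credentials] section boto uses:

import os
from ConfigParser import ConfigParser

# ~/.boto is an INI file; look the keys up by section/option name
# rather than by line position.
config = ConfigParser()
config.read(os.path.expanduser('~/.boto'))
ACCESS_KEY = config.get('Credentials', 'aws_access_key_id')
SECRET_KEY = config.get('Credentials', 'aws_secret_access_key')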
Source: https://stackoverflow.com/questions/33067814/boto-set-contents-from-filename-memory-leak