I'm copying a file from S3 to Cloudfiles, and I would like to avoid writing the file to disk. The Python-Cloudfiles library has an object.stream() call that looks to be what I need.
The Key object in boto, which represents an object in S3, can be used like an iterator, so you should be able to do something like this:
>>> import boto
>>> c = boto.connect_s3()
>>> bucket = c.lookup('garnaat_pub')
>>> key = bucket.lookup('Scan1.jpg')
>>> for bytes in key:
...     # write bytes to the output stream
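The iterate-and-write pattern above can be sketched without any network access. Here `io.BytesIO` objects stand in for the S3 key and the Cloudfiles stream (both hypothetical placeholders); the point is that the copy happens one chunk at a time, so the whole object never has to fit in memory:

```python
import io

def stream_copy(src, dst, chunk_size=8192):
    # Read from src in fixed-size chunks and write each chunk
    # to dst, instead of buffering the entire object.
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)

# Stand-ins for the S3 key (readable) and the Cloudfiles
# output stream (writable):
src = io.BytesIO(b"example object data" * 1000)
dst = io.BytesIO()
stream_copy(src, dst)
```

With boto, `src` would be the Key object itself, since it supports `read()`.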
Or, as in the case of your example, you could do:
>>> import shutil
>>> shutil.copyfileobj(key, rsObject.stream())
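`shutil.copyfileobj` works here because it only requires `read(size)` on the source and `write(data)` on the destination, both of which boto's Key object provides. A minimal runnable sketch, again using `io.BytesIO` stand-ins for the key and the Cloudfiles object:

```python
import io
import shutil

# Hypothetical stand-ins for the boto Key (readable) and the
# Cloudfiles object's write side (writable):
src = io.BytesIO(b"Scan1.jpg contents")
dst = io.BytesIO()

# copyfileobj streams in chunks (16 KB by default), so nothing
# is written to disk and memory use stays bounded.
shutil.copyfileobj(src, dst)
```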