I currently have a script that reads the existing version of a CSV saved to S3, combines that with the new rows in the pandas DataFrame, and then writes that directly back to S3.
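For context, a minimal sketch of that read-combine step (the bucket, key, and new_rows below are placeholders, and reading an s3:// path assumes the s3fs package is installed):

import pandas as pd

# hypothetical bucket/key, purely for illustration
existing = pd.read_csv('s3://my-bucket/data.csv')
new_rows = pd.DataFrame({'id': [3], 'value': ['c']})  # rows to append
combined = pd.concat([existing, new_rows], ignore_index=True)
# writing the combined frame back to S3, gzipped, is what the answer below covers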
Here's a solution in Python 3.5.2 using Pandas 0.20.1.
The source DataFrame can be read from S3, from a local CSV file, or from anywhere else.
import boto3
import gzip
import pandas as pd
from io import BytesIO, TextIOWrapper

df = pd.read_csv('s3://ramey/test.csv')

# gzip the CSV in memory: GzipFile compresses everything written
# through it into the underlying bytes buffer
gz_buffer = BytesIO()
with gzip.GzipFile(mode='w', fileobj=gz_buffer) as gz_file:
    # TextIOWrapper encodes the text that to_csv produces into UTF-8 bytes
    df.to_csv(TextIOWrapper(gz_file, 'utf8'), index=False)

# upload the compressed bytes to S3
s3_resource = boto3.resource('s3')
s3_object = s3_resource.Object('ramey', 'new-file.csv.gz')
s3_object.put(Body=gz_buffer.getvalue())
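To sanity-check the round trip, you can read the object straight back, assuming your pandas/s3fs setup supports compressed reads over S3 (pandas can also infer gzip from the .gz extension):

df_check = pd.read_csv('s3://ramey/new-file.csv.gz', compression='gzip')
print(df_check.head())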