I currently have a script that reads the existing version of a CSV saved to S3, combines it with the new rows in the pandas DataFrame, and then writes the result directly back to S3.
There is a more elegant solution using smart-open (https://pypi.org/project/smart-open/):
import pandas as pd
from smart_open import open

# smart_open streams the file straight to S3 and applies gzip
# compression automatically based on the .gz suffix. The with-block
# makes sure the stream is flushed and closed after the upload.
with open('s3://bucket/prefix/filename.csv.gz', 'w') as f:
    df.to_csv(f, index=False)
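The full read-combine-write cycle described in the question can be sketched the same way. This is a minimal sketch, not a drop-in solution: the bucket/prefix path, the `new_rows` DataFrame, and the `combine`/`update_csv_on_s3` helper names are hypothetical, and the de-duplication step is an assumption about how "combining" should behave on repeated runs.

```python
import pandas as pd

def combine(existing: pd.DataFrame, new_rows: pd.DataFrame) -> pd.DataFrame:
    # Concatenate and drop exact duplicate rows so that re-running the
    # script with overlapping data stays idempotent (an assumption).
    return pd.concat([existing, new_rows], ignore_index=True).drop_duplicates()

def update_csv_on_s3(new_rows: pd.DataFrame,
                     path: str = 's3://bucket/prefix/filename.csv.gz') -> None:
    # Hypothetical helper: smart_open reads and writes S3 objects as
    # file-like streams, inferring gzip compression from the .gz suffix.
    from smart_open import open
    with open(path) as f:
        existing = pd.read_csv(f)
    with open(path, 'w') as f:
        combine(existing, new_rows).to_csv(f, index=False)

# Local demonstration of the combine step with toy data:
existing = pd.DataFrame({'id': [1, 2], 'value': ['a', 'b']})
new_rows = pd.DataFrame({'id': [2, 3], 'value': ['b', 'c']})
combined = combine(existing, new_rows)
# The duplicate row (id=2, value='b') appears only once in the result.
```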