I have a CSV file with ~50,000 rows and 300 columns. The following operation causes a memory error in pandas (Python):
merged_df.stack(0).reset_index()
As an alternative approach, you can use the library "dask", e.g.:
```python
# Dask dataframes implement the pandas API
import dask.dataframe as dd

df = dd.read_csv('s3://.../2018-*-*.csv')
```