How do I write out a large data file to a CSV file in chunks?
I have a set of large data files (1M rows x 20 cols). However, only 5 or so columns of that data are of interest to me.
Why don't you only read the columns of interest and then save it?
import os
import pandas as pd

file_in = os.path.join(folder, filename)
file_out = os.path.join(folder, new_folder, 'new_file' + filename)

# skip the first three rows, then read the remaining header row,
# renaming the columns of interest
df = pd.read_csv(file_in, sep='\t', skiprows=(0, 1, 2), header=0,
                 names=['TIME', 'STUFF'])
df.to_csv(file_out, index=False)
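If the file is too large to fit in memory at once, `read_csv` accepts a `chunksize` argument and returns an iterator of DataFrames, so you can filter and append chunk by chunk. Here is a minimal runnable sketch; the file names, column names, and the small sample file it generates are hypothetical stand-ins for your data:

```python
import pandas as pd

# Hypothetical input/output paths for illustration
file_in = 'big_file.tsv'
file_out = 'new_file.csv'

# Build a small tab-separated sample file so the sketch is self-contained:
# three junk rows to skip, a header row, then 10 data rows
with open(file_in, 'w') as f:
    f.write('junk1\njunk2\njunk3\n')
    f.write('TIME\tSTUFF\tEXTRA\n')
    for i in range(10):
        f.write(f'{i}\t{i * 2}\t{i * 3}\n')

first = True
# chunksize makes read_csv yield DataFrames of at most 4 rows each;
# usecols keeps only the columns of interest
for chunk in pd.read_csv(file_in, sep='\t', skiprows=(0, 1, 2),
                         header=0, usecols=['TIME', 'STUFF'],
                         chunksize=4):
    # overwrite on the first chunk, append afterwards,
    # and write the header line only once
    chunk.to_csv(file_out, mode='w' if first else 'a',
                 header=first, index=False)
    first = False
```

Writing with `mode='a'` and `header=first` is what keeps the output a single valid CSV: without the `header=first` guard, the header row would be repeated before every chunk.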