Dask Memory Error when running df.to_csv()
I am trying to index and save large csvs that cannot be loaded into memory. My code to load the csv, perform a computation, and index by the new values works without issue. A simplified version is:

```python
import dask.dataframe as dd
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=6, threads_per_worker=1)
client = Client(cluster, memory_limit='1GB')

# Read in ~250 MB blocks, add a computed column, and re-index by it
df = dd.read_csv(filepath, header=None, sep=' ', blocksize=25e7)
df['new_col'] = df.map_partitions(lambda x: some_function(x))
df = df.set_index(df.new_col, sorted=False)
```

However, when I use large csvs, I hit a memory error when running df.to_csv().
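For context, a rough sketch of the write step the title refers to (the exact call from the post is not shown above, and the output path here is a placeholder):

```python
# Hypothetical write step; with '*' in the name, Dask writes one CSV per partition.
df.to_csv('indexed-*.csv')
```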