New to dask. I have a 1GB CSV file; when I read it into a dask dataframe it creates around 50 partitions. After my changes, when I write the file it creates as many CSV files as there are partitions. Is there a way to write all partitions to a single CSV file?
You can convert your dask dataframe to a pandas dataframe with the `compute` function and then use `to_csv`. Note that `compute()` loads the whole dataset into memory, so this only works when the data fits in RAM. Something like this:
df_dask.compute().to_csv('csv_path_file.csv')