The documentation for Dask talks about repartitioning to reduce overhead here. However, it seems to indicate that you need some knowledge of what your dataframe will look like beforehand.
As of Dask 2.0.0, you may call .repartition(partition_size="100MB").
This method performs an object-aware (.memory_usage(deep=True)) breakdown of partition size: it will merge partitions that are too small and split partitions that have grown too large.
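As a minimal sketch (the file pattern and dataframe here are hypothetical; any Dask DataFrame works the same way):

    import dask.dataframe as dd

    # Hypothetical input: many small CSV files, which tends to produce
    # many small partitions.
    ddf = dd.read_csv("data/*.csv")

    # Rebalance to roughly 100 MB per partition. Dask measures each
    # partition with .memory_usage(deep=True), then merges small
    # partitions and splits oversized ones. Note that Dask has to
    # compute the partition sizes to do this, so it is not free.
    ddf = ddf.repartition(partition_size="100MB")
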
Dask's documentation also outlines this usage.