I'm invoking Pyspark with Spark 2.0 in local mode with the following command:
pyspark --executor-memory 4g --driver-memory 4g
The input da
I believe the cause of this problem is coalesce(): although it avoids a full shuffle (which repartition() would trigger), it still has to shrink the data into the requested number of partitions.
Here, you are asking all of the data to fit into a single partition, so one task (and only that one task) has to process everything, which can push its container past its memory limits.
So either ask for more than one partition, or avoid coalesce() in this case.
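A minimal sketch of that idea (the DataFrame df, the partition count 8, and the /tmp output paths are illustrative assumptions, not taken from your job):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("coalesce-example").getOrCreate()

    # Placeholder for your actual input; df stands in for whatever you are reading.
    df = spark.range(0, 10000000)

    # coalesce(1) forces every row through a single task; keeping several
    # partitions spreads the work (and the memory pressure) across tasks.
    df.coalesce(8).write.mode("overwrite").csv("/tmp/output_coalesced")

    # repartition(n) does a full shuffle, but it rebalances the data evenly
    # across n tasks instead of squeezing it all into one.
    df.repartition(8).write.mode("overwrite").csv("/tmp/output_repartitioned")

With 8 partitions the result is written as 8 part-files instead of one, so no single task has to hold the full dataset in memory.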
Otherwise, you could try the solutions in the links below for increasing your memory configuration:
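For reference, in local mode the executors run inside the driver JVM, so the driver memory setting is usually the one that matters; an illustrative (not prescriptive) invocation would be:

    pyspark --driver-memory 8g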