When performing a shuffle, my Spark job fails with "no space left on device", but when I run df -h
it says I have free space left. Why does this happen, and how can I fix it?
I encountered a similar problem. By default, Spark uses /tmp to store intermediate files. While the job is running, you can run df -h
and watch the used space of the filesystem mounted at "/" grow; when that device runs out of space, this exception is thrown. To solve the problem, I set spark.local.dir
in SPARK_HOME/conf/spark-defaults.conf to a path on a filesystem with enough free space (the equivalent environment variable is SPARK_LOCAL_DIRS, set in conf/spark-env.sh).
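As a minimal sketch, assuming you have a large volume mounted at /data (adjust the path to whatever filesystem actually has room on your nodes), the change looks like this:

    # SPARK_HOME/conf/spark-defaults.conf
    # Point shuffle/spill scratch space at a filesystem with free room.
    # A comma-separated list spreads scratch I/O across several disks.
    spark.local.dir    /data/spark-tmp

Make sure the directory exists and is writable by the user running the executors. One caveat: if you run on YARN, this setting is overridden by the node manager's local directories (yarn.nodemanager.local-dirs), so the scratch location has to be changed there instead.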