How to optimize shuffle spill in an Apache Spark application


Performance-tuning Spark takes quite a bit of investigation and learning. There are a few good resources, including this video. Spark 1.4 has better diagnostics and visualisation in the UI, which can help you.

In summary, you spill when the size of the RDD partitions at the end of a stage exceeds the amount of memory available for the shuffle buffer.

You can:

  1. Manually repartition() your prior stage so that you have smaller partitions from input (illustrated in the sketch after this list).
  2. Increase the shuffle buffer by increasing the memory in your executor processes (spark.executor.memory).
  3. Increase the shuffle buffer by increasing the fraction of executor memory allocated to it (spark.shuffle.memoryFraction) from the default of 0.2. Whatever you add here you should give back by lowering spark.storage.memoryFraction, since both fractions come out of the same executor heap.
  4. Increase the shuffle buffer per thread by reducing the ratio of worker threads (SPARK_WORKER_CORES) to executor memory.
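To see these knobs in one place, here is a minimal Scala sketch assuming a Spark 1.x-era job; the input path, partition count, and memory values are illustrative assumptions, not tuned recommendations.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ShuffleTuningSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("shuffle-tuning-sketch")
      // Option 2: give each executor more heap, which enlarges the shuffle buffer.
      .set("spark.executor.memory", "4g")                 // hypothetical value
      // Option 3: raise the shuffle fraction above the 0.2 default...
      .set("spark.shuffle.memoryFraction", "0.4")
      // ...and give the difference back by lowering the storage fraction (default 0.6).
      .set("spark.storage.memoryFraction", "0.4")
    // Option 4 (SPARK_WORKER_CORES) is an environment variable set in spark-env.sh,
    // not a SparkConf entry, so it is not shown here.

    val sc = new SparkContext(conf)

    // Option 1: repartition before the wide operation so each task's
    // shuffle output fits in the buffer. 400 is a hypothetical count.
    val lines = sc.textFile("hdfs:///path/to/input")      // hypothetical path
      .repartition(400)

    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1L))
      .reduceByKey(_ + _)                                  // the shuffle happens here

    println(counts.count())
    sc.stop()
  }
}
```

The same settings can equally be passed on the command line with `spark-submit --conf key=value` instead of being hard-coded in the application.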

If there is an expert listening, I would love to know more about how the memoryFraction settings interact and what their reasonable ranges are.
