How do I prevent a Spark executor from getting lost and YARN from killing its container due to memory limits?

慢半拍i 2020-12-13 20:50

I have the following code, which fires hiveContext.sql() most of the time. My task is to create a few tables and insert values into them after processing.

2 Answers
  • 2020-12-13 21:00

    Generally, you should always dig into logs to get the real exception out (at least in Spark 1.3.1).
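
    On YARN with log aggregation enabled, a quick way to dig those logs out is the yarn CLI; a minimal sketch (the application ID is a placeholder for your own job's ID):

        # Fetch the aggregated container logs for the finished application
        # and search for YARN's usual kill message.
        yarn logs -applicationId application_1490000000000_0001 \
          | grep -i "beyond physical memory"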

    tl;dr
    A safe config for Spark on YARN:
    spark.shuffle.memoryFraction=0.5 - this allows the shuffle to use more of the allocated memory
    spark.yarn.executor.memoryOverhead=1024 - this is set in MB. YARN kills executors when their memory usage is larger than (executor-memory + executor.memoryOverhead)
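
    As a sketch, both settings can be passed when launching the job with spark-submit; the executor memory and jar name below are placeholders, not values from the question:

        # YARN sizes each container at executor-memory + memoryOverhead
        # (3g + 1024m here) and kills the executor only above that limit.
        spark-submit \
          --master yarn-cluster \
          --executor-memory 3g \
          --conf spark.shuffle.memoryFraction=0.5 \
          --conf spark.yarn.executor.memoryOverhead=1024 \
          your-job.jar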

    A little more info

    From reading your question, you mention that you get a "shuffle not found" exception.

    In case of org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle, you should increase spark.shuffle.memoryFraction, for example to 0.5.

    The most common reason for YARN killing off my executors was memory usage beyond what it expected. To avoid that, increase spark.yarn.executor.memoryOverhead; I've set it to 1024, even though my executors use only 2-3 GB of memory.

  • 2020-12-13 21:19

    My assumption is that you have a limited number of executors on your cluster, and the job might be running in a shared environment.

    As you said, your file size is small, so you can set a smaller number of executors and increase the executor cores; setting the memoryOverhead property is important here (see the sketch after the list below).

    1. Set number of executors = 5
    2. Set number of executor cores = 4
    3. Set memory overhead = 2G
    4. Set shuffle partitions = 20 (5 executors x 4 cores = 20 concurrent tasks, so each partition gets a task and parallelism is fully used)
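
    A minimal spark-submit sketch with these numbers; the executor memory and jar name are placeholders, and I'm assuming the shuffle-partition count maps to spark.sql.shuffle.partitions since the job runs through hiveContext.sql():

        # 5 executors x 4 cores = 20 concurrent tasks, one per shuffle partition
        spark-submit \
          --master yarn-cluster \
          --num-executors 5 \
          --executor-cores 4 \
          --executor-memory 4g \
          --conf spark.yarn.executor.memoryOverhead=2048 \
          --conf spark.sql.shuffle.partitions=20 \
          your-job.jar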

    Using the above properties, I am sure you will avoid executor out-of-memory issues without compromising performance.
