Apache Spark: setting executor instances does not change the executors

盖世英雄少女心 · 2020-12-01 01:09

I have an Apache Spark application running on a YARN cluster (Spark runs on 3 nodes of this cluster) in cluster mode.

When the application is running, the Spark UI shows:

4 Answers
半阙折子戏 · 2020-12-01 01:44

Increase yarn.nodemanager.resource.memory-mb in yarn-site.xml (sketched below).
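As a minimal sketch, the property lives in yarn-site.xml on each NodeManager host; the 14336 MB value below is illustrative, not from the original answer, and should leave headroom for the OS and other daemons:

```xml
<!-- yarn-site.xml on each node: total memory the NodeManager may hand
     out to containers. 14336 MB (14g) is an illustrative value only. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>14336</value>
</property>
```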

With 12g per node you can only launch the driver (3g) and 2 executors (11g each).

    Node1 - driver 3g (+7% overhead)

    Node2 - executor1 11g (+7% overhead)

    Node3 - executor2 11g (+7% overhead)

Now you are requesting a third executor of 11g, and no node has 11g of memory left.

For the 7% overhead, see spark.yarn.executor.memoryOverhead and spark.yarn.driver.memoryOverhead in https://spark.apache.org/docs/1.2.0/running-on-yarn.html. The arithmetic is sketched below.
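As a rough sketch of that arithmetic, assuming the Spark 1.x default overhead of max(384 MB, 7% of the heap) and ignoring YARN's rounding to yarn.scheduler.minimum-allocation-mb:

```python
# Back-of-the-envelope container sizing for the layout described above.
# Assumes Spark 1.x defaults: overhead = max(384 MB, 7% of the heap);
# YARN's rounding to yarn.scheduler.minimum-allocation-mb is ignored.

NODE_MB = 12 * 1024  # yarn.nodemanager.resource.memory-mb = 12g per node

def container_mb(heap_gb: int) -> int:
    """Heap plus the default spark.yarn.*.memoryOverhead."""
    heap_mb = heap_gb * 1024
    return heap_mb + max(384, int(heap_mb * 0.07))

driver = container_mb(3)      # ~3456 MB, lands on Node1
executor = container_mb(11)   # ~12052 MB, nearly a whole node

print(f"driver container:    {driver} MB")
print(f"executor container:  {executor} MB")
print(f"left on driver node: {NODE_MB - driver} MB")  # ~8832 MB < 12052 MB
# Executors fit on Node2 and Node3, but a third one cannot be scheduled:
# the only remaining space is next to the driver, and it is too small.
```

Running this shows the driver's node retains roughly 8.8g, well short of the ~11.8g a third executor container needs, which is why the executor count never changes until the per-node memory is raised.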
