Spark Driver memory and Application Master memory

Submitted by 倾然丶 夕夏残阳落幕 on 2021-02-05 20:26:50

Question


Am I understanding the documentation for client mode correctly?

  1. Is client mode the opposite of cluster mode, where the driver runs within the application master?
  2. In client mode, the driver and application master are separate processes; must spark.driver.memory + spark.yarn.am.memory therefore be less than the machine's memory?
  3. In client mode, is the driver memory not included in the application master memory setting?

Answer 1:


Is client mode the opposite of cluster mode, where the driver runs within the application master?

Yes. When a Spark application is deployed on YARN:

  • In client mode, the driver runs on the machine where the application was submitted, and that machine must remain reachable on the network until the application completes.
  • In cluster mode, the driver runs on the application master node (one per Spark application), and the submitting machine does not need to stay on the network after submission.
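The deploy mode described above is selected with the --deploy-mode flag of spark-submit. A minimal sketch (the application JAR name and main class here are hypothetical placeholders):

```shell
# Client mode: the driver runs inside this spark-submit process on the
# submitting machine, which must stay reachable until the job finishes.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --class com.example.MyApp \
  my-app.jar

# Cluster mode: the driver runs inside the YARN application master,
# so the submitting machine may disconnect after submission.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar
```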

[Diagram: Client mode]

[Diagram: Cluster mode]

If a Spark application is submitted in cluster mode to Spark's own resource manager (standalone), the driver process runs on one of the worker nodes.

References for images and content:

  • StackOverflow - Spark on yarn concept understanding
  • Cloudera Blog - Apache Spark Resource Management and YARN App Models

In client mode, the driver and application master are separate processes; must spark.driver.memory + spark.yarn.am.memory therefore be less than the machine's memory?

No. In client mode, the driver and the AM are separate processes on different machines, so their memory settings need not be combined. However, spark.yarn.am.memory plus some overhead must be less than the memory YARN can allocate to a container (yarn.nodemanager.resource.memory-mb). If the container exceeds that limit, YARN's Resource Manager will kill it.
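As a rough sketch of that sizing, assuming the documented defaults for spark.yarn.am.memoryOverhead (10% of the AM memory, with a 384 MiB floor):

```shell
# Approximate the AM container request in client mode.
# Assumption: overhead = max(384 MiB, 10% of spark.yarn.am.memory),
# per Spark's documented default for spark.yarn.am.memoryOverhead.
am_mib=512                        # spark.yarn.am.memory default
overhead=$(( am_mib / 10 ))
[ "$overhead" -lt 384 ] && overhead=384
container_mib=$(( am_mib + overhead ))
echo "AM container request: ${container_mib} MiB"   # 512 + 384 = 896
```

This total, not spark.yarn.am.memory alone, is what must fit inside yarn.nodemanager.resource.memory-mb.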

In client mode, is the driver memory not included in the application master memory setting?

Correct, it is not included. In client mode, spark.driver.memory only needs to be less than the available memory on the machine from which the Spark application is launched.

In cluster mode, however, use spark.driver.memory instead of spark.yarn.am.memory.

spark.yarn.am.memory : 512m (default)

Amount of memory to use for the YARN Application Master in client mode, in the same format as JVM memory strings (e.g. 512m, 2g). In cluster mode, use spark.driver.memory instead. Use lower-case suffixes, e.g. k, m, g, t, and p, for kibi-, mebi-, gibi-, tebi-, and pebibytes, respectively.
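Putting it together, a client-mode submission can size the driver and the AM independently, while a cluster-mode submission sizes the AM via the driver setting. The values and JAR name below are illustrative:

```shell
# Client mode: driver memory and AM memory are independent knobs.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --driver-memory 4g \
  --conf spark.yarn.am.memory=1g \
  my-app.jar

# Cluster mode: the AM hosts the driver, so size it with
# --driver-memory; spark.yarn.am.memory does not apply here.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  my-app.jar
```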

See the Spark "Running on YARN" configuration documentation for more about these properties.




Answer 2:


In client mode, the driver is launched directly within the spark-submit (client) process. The application master is created on one of the nodes in the cluster. spark.driver.memory (plus memory overhead) must be less than the client machine's memory.

In cluster mode, the driver runs inside the application master on one of the nodes in the cluster.

https://blog.cloudera.com/blog/2014/05/apache-spark-resource-management-and-yarn-app-models/



Source: https://stackoverflow.com/questions/50402020/spark-driver-memory-and-application-master-memory
