Why can't more than 32 cores be requested from YARN to run a job?

Submitted by 落爺英雄遲暮 on 2019-12-07 07:07:13

Question


Setup:

  • No. of nodes: 3
  • No. of cores: 32 Cores per machine
  • RAM: 410GB per machine
  • Spark Version: 1.2.0
  • Hadoop Version: 2.4.0 (Hortonworks)

Objective:

  • I want to run a Spark job with more than 32 executor cores.

Problem:

When I request more than 32 executor cores for a Spark job, I get the following error:

Uncaught exception: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=150, maxVirtualCores=32
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:212)
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateResourceRequests(RMServerUtils.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:501)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
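
For reference, the original submission command isn't shown in the question; a hypothetical invocation along these lines would produce the error above, since in YARN mode each executor container asks YARN for spark.executor.cores vcores:

# Hypothetical command; the actual invocation is not in the question.
# Each executor container requests --executor-cores vcores from YARN,
# so 150 here exceeds the scheduler's maxVirtualCores of 32.
spark-submit \
  --master yarn-cluster \
  --num-executors 3 \
  --executor-cores 150 \
  --executor-memory 100g \
  my-spark-job.jar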

Here are some properties from my yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>400000</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>3072</value>
</property>

From the above two properties, I thought I would be able to request 400000 / 3072 ≈ 130 cores. But it is still limiting me to 32. How can I assign more than 32 executor cores to a Spark job?

Please let me know if more info is needed and I will update the question.

EDIT 1:

The vcore setting from yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>2</value>
</property>
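
(Note that yarn.nodemanager.resource.cpu-vcores controls how many vcores each NodeManager advertises to the ResourceManager; it does not cap a single container's request. With 32 physical cores per machine, a value matching the hardware would look like the following illustrative snippet:)

<!-- Illustrative: advertise the machine's real core count (32 per the setup above) -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>32</value>
</property>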

Answer 1


The cap in the error message is yarn.scheduler.maximum-allocation-vcores, the maximum number of vcores a single container may request, which defaults to 32. In yarn-site.xml, raise it:

<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>130</value>
</property>
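
This change typically requires a ResourceManager restart to take effect. Alternatively, the 32-vcore cap can be left in place by spreading the work across more executors with fewer cores each; a sketch with illustrative values:

# Sketch: 5 executors x 30 cores = 150 cores in total,
# while no single container exceeds the default 32-vcore cap.
spark-submit \
  --master yarn-cluster \
  --num-executors 5 \
  --executor-cores 30 \
  --executor-memory 64g \
  my-spark-job.jar

As an aside, the CapacityScheduler's default DefaultResourceCalculator schedules on memory only and ignores vcores at placement time, which is why the memory arithmetic (400000 MB / 3072 MB ≈ 130 containers) appeared to govern how many containers you could get; the vcore check in the error is a separate validation applied to each individual resource request.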


Source: https://stackoverflow.com/questions/29780401/why-cannot-more-than-32-cores-be-requested-from-yarn-to-run-a-job
