Why is the Flink container vcore size always 1?

Posted by 半世苍凉 on 2019-12-01 14:12:40

An answer from Kien Truong

Hi,

You have to enable CPU scheduling in YARN; otherwise it always shows that only 1 vcore is allocated for each container, regardless of how many vcores Flink tries to allocate. To fix this, edit the following property in capacity-scheduler.xml:

<property>
 <name>yarn.scheduler.capacity.resource-calculator</name>
 <!-- <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value> -->
 <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
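After editing capacity-scheduler.xml, the scheduler has to pick up the new configuration before it takes effect. A minimal sketch (assuming a standard CapacityScheduler setup; if the calculator change still doesn't take effect after a refresh, restart the ResourceManager):

 # re-read capacity-scheduler.xml on the ResourceManager
 yarn rmadmin -refreshQueues

Then resubmit the Flink session and check the vcores allocated per container in the ResourceManager web UI.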

TaskManager memory is, for example, 1400MB, but Flink reserves some amount for off-heap memory, so the actual heap size is smaller.

This is controlled by 2 settings:

containerized.heap-cutoff-min: default 600MB

containerized.heap-cutoff-ratio: default 15% of TM's memory

That's why your TM's heap size is limited to ~800MB (1400 - 600).
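To make the arithmetic explicit: as I understand Flink's containerized memory logic, the cutoff is the larger of the two values above, so with the numbers from this thread (treat the exact defaults as an assumption for your Flink version):

 cutoff = max(containerized.heap-cutoff-min, containerized.heap-cutoff-ratio * container memory)
        = max(600MB, 0.15 * 1400MB = 210MB)
        = 600MB
 heap  ~= 1400MB - 600MB = 800MB

If you need a larger heap inside the same container size, lowering containerized.heap-cutoff-min in flink-conf.yaml should be the knob that matters here, since the ratio-based value is far below 600MB at this container size.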

Regards,

Kien

@yinhua.

When you start a session with ./bin/yarn-session.sh, you need to add the -s argument:

-s,--slots Number of slots per TaskManager
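For example, a sketch against Flink 1.4's yarn-session.sh (the container count and memory values are just placeholders):

 # 2 TaskManager containers, 3 slots each, 1400MB per TaskManager, 1024MB for the JobManager
 ./bin/yarn-session.sh -n 2 -s 3 -tm 1400 -jm 1024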

details:

  1. https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/yarn_setup.html
  2. https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/cli.html#usage

I finally got the answer. It's because YARN uses the DefaultResourceCalculator allocation strategy, so the YARN ResourceManager only accounts for memory: even though Flink requested 3 vcores, YARN simply ignores the CPU core count.
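A quick way to confirm which calculator your cluster is actually using is to look it up in the scheduler config (the path is an assumption; use your own $HADOOP_CONF_DIR):

 grep -A 2 "yarn.scheduler.capacity.resource-calculator" $HADOOP_CONF_DIR/capacity-scheduler.xml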
