Could not deallocate container for task attemptId NNN

Submitted by ぃ、小莉子 on 2019-12-11 03:31:02

Question


I'm trying to understand how containers are allocated memory in YARN and how their performance varies with different hardware configurations.

The machine has 30 GB of RAM; I picked 24 GB for YARN and left 6 GB for the system.

yarn.nodemanager.resource.memory-mb=24576

Then I followed http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html to come up with values for the Map and Reduce task memory settings.

I left these two at their default values:

mapreduce.map.memory.mb
mapreduce.map.java.opts

But I changed these two settings:

mapreduce.reduce.memory.mb=20480
mapreduce.reduce.java.opts=-Xmx16384m
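For reference, a minimal sketch of how these reducer settings would typically be expressed in mapred-site.xml (assuming a standard Hadoop 2.x client configuration; the values are the ones above):

<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>20480</value> <!-- size of the reducer's YARN container, in MB -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx16384m</value> <!-- JVM heap, kept below the container size to leave headroom -->
</property>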

But when I submit a job with these settings, I get an error and the job is forcibly killed:

2015-03-10 17:18:18,019 ERROR [Thread-51] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1426006703004_0004_r_000000_0

The only value that has worked for me so far is setting the reducer memory to <= 12 GB, but why is that? Why can't I allocate more memory, or anything up to 2 * RAM-per-container?

So what am I missing here? Is there anything else I need to consider when setting these values for better performance?


Answer 1:


I resolved this issue by changing the yarn.scheduler.maximum-allocation-mb value. In YARN, a job cannot request more memory per container than the server-side setting yarn.scheduler.maximum-allocation-mb. Although I had set yarn.nodemanager.resource.memory-mb, the maximum allocation size also needs to reflect it. After raising the maximum allocation, the job worked as expected:

yarn.nodemanager.resource.memory-mb=24576
yarn.scheduler.maximum-allocation-mb=24576
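In case it helps, a minimal sketch of both server-side values as they might appear in yarn-site.xml (assuming a standard Hadoop 2.x setup on the ResourceManager and NodeManager nodes):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value> <!-- total memory the NodeManager offers to containers -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>24576</value> <!-- largest single container the scheduler will grant -->
</property>

A single container request above yarn.scheduler.maximum-allocation-mb will not be granted, which is why the 20 GB reducer container could not be allocated until this cap was raised.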


Source: https://stackoverflow.com/questions/28970528/could-not-deallocate-container-for-task-attemptid-nnn
