Question
I'm trying to understand how containers are allocated memory in YARN and how they perform under different hardware configurations.
The machine has 30 GB of RAM; I allocated 24 GB to YARN and left 6 GB for the system.
yarn.nodemanager.resource.memory-mb=24576
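For reference, this is roughly how that setting looks in yarn-site.xml (a minimal sketch; only the property and value above come from my setup, the surrounding file is assumed):

<!-- yarn-site.xml: total memory the NodeManager may hand out to containers on this node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>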
Then I followed http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html to come up with values for the Map and Reduce task memory settings.
I left these two at their default values:
mapreduce.map.memory.mb
mapreduce.map.java.opts
But I changed these two configuration values:
mapreduce.reduce.memory.mb=20480
mapreduce.reduce.java.opts=-Xmx16384m
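In mapred-site.xml the changed values look roughly like this (a minimal sketch; following the linked guide, the -Xmx heap is set to about 80% of the container size, i.e. 16384m inside a 20480 MB container):

<!-- mapred-site.xml: reducer container size and the JVM heap inside it -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>20480</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx16384m</value>
</property>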
But when I submit a job with these settings, I get an error and the job is killed:
2015-03-10 17:18:18,019 ERROR [Thread-51] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1426006703004_0004_r_000000_0
The only setting that has worked for me so far is reducer memory <= 12 GB, but why is that? Why can't I allocate more memory, for example up to 2 * RAM-per-container?
So what am I missing here? Is there anything else I need to consider when setting these values for better performance?
Answer 1:
Resolved this issue by changing the yarn.scheduler.maximum-allocation-mb value. In YARN, a single container request cannot exceed the server-side setting yarn.scheduler.maximum-allocation-mb. Although I had set yarn.nodemanager.resource.memory-mb, the maximum allocation size also needs to reflect it. After updating the maximum allocation, the job worked as expected:
yarn.nodemanager.resource.memory-mb=24576
yarn.scheduler.maximum-allocation-mb=24576
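In yarn-site.xml, the new property sits alongside the NodeManager memory setting shown earlier (a sketch; restarting the ResourceManager so the scheduler picks up the change is assumed):

<!-- yarn-site.xml: largest single container the scheduler will grant -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>24576</value>
</property>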
Source: https://stackoverflow.com/questions/28970528/could-not-deallocate-container-for-task-attemptid-nnn