Yarn container launch failed exception and mapred-site.xml configuration

Submitted by 早过忘川 on 2019-12-13 06:51:37

Question


I have 7 nodes in my Hadoop cluster [8 GB RAM and 4 vCPUs on each node]: 1 namenode + 6 datanodes.

EDIT-1 @ARNON: I followed the link, made the calculations according to the hardware configuration of my nodes, and have added the updated mapred-site.xml and yarn-site.xml files to my question. Still my application is crashing with the same exception.

My MapReduce application has 34 input splits with a block size of 128 MB.

mapred-site.xml has the following properties:

mapreduce.framework.name  = yarn
mapred.child.java.opts    = -Xmx2048m
mapreduce.map.memory.mb   = 4096
mapreduce.map.java.opts   = -Xmx2048m
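
For reference, these settings correspond to the following entries in the standard Hadoop XML property format (reconstructed from the list above, not the poster's actual file):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
  <property>
    <!-- Memory YARN allocates to each map container -->
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <!-- JVM heap inside the map container; must fit within the value above -->
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
</configuration>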

yarn-site.xml has the following properties:

yarn.resourcemanager.hostname        = hadoop-master
yarn.nodemanager.aux-services        = mapreduce_shuffle
yarn.nodemanager.resource.memory-mb  = 6144
yarn.scheduler.minimum-allocation-mb = 2048
yarn.scheduler.maximum-allocation-mb = 6144
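
Likewise, the yarn-site.xml entries in the same XML format (again reconstructed from the list above):

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <!-- Total memory each NodeManager can hand out to containers -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>6144</value>
  </property>
  <property>
    <!-- Container requests are rounded up to a multiple of this -->
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>6144</value>
  </property>
</configuration>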

EDIT-2 @ARNON: Setting yarn.scheduler.minimum-allocation-mb to 4096 puts all the map tasks in a suspended state, and setting it to 3072 crashes with the following:

Exception from container-launch: ExitCodeException exitCode=134: /bin/bash: line 1:  3876 Aborted  (core dumped) /usr/lib/jvm/java-7-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx8192m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 192.168.0.12 50842 attempt_1424264025191_0002_m_000005_0 11 > 
/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stdout 2> 
/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stderr

How can I avoid this? Any help is appreciated.

Is there an option to restrict the number of containers on Hadoop nodes?


Answer 1:


It seems you are allocating too much memory to your tasks (even without looking at all the configurations): 8 GB of RAM per node and 8 GB per map task, all of which is heap. Try to use lower allocations, e.g. 2 GB per task with a 1 GB heap, or something like that.
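
A minimal sketch of what such lower allocations could look like in mapred-site.xml, assuming 2 GB containers with 1 GB heaps (illustrative values, not tested against the poster's cluster):

<property>
  <!-- 2 GB container per map task instead of 4 GB -->
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <!-- 1 GB heap, leaving headroom for non-heap JVM memory within the container -->
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>

With yarn.nodemanager.resource.memory-mb = 6144, each node could then run three 2 GB map containers at once instead of a single 4 GB one.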



Source: https://stackoverflow.com/questions/28586561/yarn-container-lauch-failed-exception-and-mapred-site-xml-configuration
