Full utilization of all cores in Hadoop pseudo-distributed mode

Submitted by 女生的网名这么多〃 on 2019-11-30 07:30:26

Question


I am running a task in pseudo-distributed mode on my 4-core laptop. How can I ensure that all cores are effectively used? Currently my job tracker shows that only one job is executing at a time. Does that mean only one core is used?

The following are my configuration files.

conf/core-site.xml:

<configuration>
   <property>
       <name>fs.default.name</name>
       <value>hdfs://localhost:9000</value>
   </property>
 </configuration>

conf/hdfs-site.xml:

<configuration>
  <property>
       <name>dfs.replication</name>
       <value>1</value>
  </property>
</configuration>

conf/mapred-site.xml:

<configuration>
   <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>  
   </property>

</configuration>

EDIT: As per the answer, I need to add the following properties to mapred-site.xml:

 <property>
     <name>mapred.map.tasks</name> 
     <value>4</value> 
  </property>
  <property>
     <name>mapred.reduce.tasks</name> 
     <value>4</value> 
  </property>

Answer 1:


mapred.map.tasks and mapred.reduce.tasks will control this, and (I believe) would be set in mapred-site.xml. However, this establishes them as cluster-wide defaults; more usually you would configure them on a per-job basis. You can set the same parameters on the java command line with -D.
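
As a rough sketch, a per-job override on the command line could look like the following. It assumes the job's driver class goes through ToolRunner/GenericOptionsParser (so that -D options are applied), and the jar name, driver class, and input/output paths are placeholders:

 hadoop jar myjob.jar MyJobDriver \
     -D mapred.map.tasks=4 \
     -D mapred.reduce.tasks=4 \
     /input/path /output/path

Setting the values per job like this keeps the cluster-wide defaults in mapred-site.xml untouched.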




Answer 2:


The mapreduce.tasktracker.map.tasks.maximum and mapreduce.tasktracker.reduce.tasks.maximum properties control the number of map and reduce slots per node. For a 4-core processor, start with 2/2 and adjust the values from there if required. A slot is either a map slot or a reduce slot; setting the values to 4/4 makes the Hadoop framework launch 4 map and 4 reduce tasks simultaneously, so a total of 8 map and reduce tasks run at a time on a node.
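
As a sketch, the per-node slot limits for a 4-core machine could be added to mapred-site.xml like this (the 2/2 values are just the starting point suggested above, not a definitive tuning; the TaskTracker must be restarted for the change to take effect):

   <property>
        <name>mapreduce.tasktracker.map.tasks.maximum</name>
        <value>2</value>
   </property>
   <property>
        <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
   </property>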

The mapred.map.tasks and mapred.reduce.tasks properties control the total number of map/reduce tasks for the job, not the number of tasks per node. Also, mapred.map.tasks is only a hint to the Hadoop framework: the actual number of map tasks for a job equals the number of InputSplits (for instance, a 1 GB input stored in 64 MB HDFS blocks typically produces 16 splits, and therefore 16 map tasks, regardless of the hint).



Source: https://stackoverflow.com/questions/8357296/full-utilization-of-all-cores-in-hadoop-pseudo-distributed-mode
