How to tell MapReduce how many mappers to use?

Submitted by 泄露秘密 on 2019-12-21 21:43:51

Question


I am trying to optimize the speed of a MapReduce job.

Is there any way I can tell Hadoop to use a particular number of mapper/reducer processes? Or, at least, a minimum number of mapper processes?

In the documentation, it is specified that you can do that with the method

public void setNumMapTasks(int n)

of the JobConf class.

That way is obsolete, however, so I am starting the job with the Job class. What is the right way of doing this?


Answer 1:


The number of map tasks is determined by the number of blocks in the input. If the input file is 100MB and the HDFS block size is 64MB, the input file takes 2 blocks, so 2 map tasks will be spawned. JobConf.setNumMapTasks() (1) is only a hint to the framework.
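For illustration, here is a minimal sketch using the old org.apache.hadoop.mapred API that the question mentions (MapHintExample is a made-up class name; input and output paths are assumed to come from the command line, and the framework's default identity mapper and reducer are used):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MapHintExample {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MapHintExample.class);
        conf.setJobName("map-hint-example");

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Only a hint: the framework still derives the actual number of
        // map tasks from the number of input splits (HDFS blocks).
        conf.setNumMapTasks(10);

        JobClient.runJob(conf);
    }
}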

The number of reducers is set by the JobConf.setNumReduceTasks() function. This determines the total number of reduce tasks for the job. Also, the mapred.tasktracker.reduce.tasks.maximum parameter determines the number of reduce tasks that can run in parallel on a single TaskTracker node.
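Since the question asks about the new org.apache.hadoop.mapreduce API, here is a sketch of setting the reducer count through the Job class (ReducerCountExample is a made-up class name; Job.getInstance is the Hadoop 2.x factory method, while older releases use new Job(conf) instead):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReducerCountExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "reducer-count-example");
        job.setJarByClass(ReducerCountExample.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Unlike the map-task hint, this setting is authoritative:
        // the job will run exactly 4 reduce tasks.
        job.setNumReduceTasks(4);

        // Note: mapred.tasktracker.reduce.tasks.maximum is a cluster-side
        // setting (mapred-site.xml) per TaskTracker, not a per-job knob.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}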

You can find more information on the number of map and reduce tasks at (2).

(1) - http://hadoop.apache.org/mapreduce/docs/r0.21.0/api/org/apache/hadoop/mapred/JobConf.html#setNumMapTasks%28int%29
(2) - http://wiki.apache.org/hadoop/HowManyMapsAndReduces



Source: https://stackoverflow.com/questions/7418277/how-to-tell-mapreduce-how-many-mappers-to-use
