Question
Can I write a Hadoop job that has only Mappers and Combiners (i.e. mini-reducers, with no Reducer)?
job.setMapperClass(WordCountMapper.class);
job.setCombinerClass(WordCountReducer.class);
conf.setInt("mapred.reduce.tasks", 0);
I tried this, but the job tracker page always shows one reduce task:
Launched reduce tasks = 1
How can I remove the reducers while keeping the combiners? Is that possible?
Answer 1:
In the case you describe, you should use Reducers. Use Context.getInputSplit().getPath() + Context.getInputSplit().getStart() as the key; this combination is unique for each Mapper.
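A minimal sketch of that idea, assuming the new org.apache.hadoop.mapreduce API and a FileInputFormat-based job (so the split can be cast to FileSplit); the class name and key format here are illustrative, not from the original answer:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class PerSplitKeyMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // FileInputFormat hands each map task a FileSplit; the file path plus
        // the split's start offset uniquely identifies this mapper's input.
        FileSplit split = (FileSplit) context.getInputSplit();
        Text key = new Text(split.getPath().toString() + ":" + split.getStart());
        context.write(key, line);
    }
}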
Answer 2:
You need to tell your job that you don't care about the reducer: see JobConf#setNumReduceTasks(int).
// old Hadoop API (org.apache.hadoop.mapred.JobConf)
jobConf.setNumReduceTasks(0);
// new Hadoop API (org.apache.hadoop.mapreduce.Job)
job.setNumReduceTasks(0);
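For context, a minimal map-only driver sketch with the new API (WordCountMapper is the question's class; the driver name and I/O paths are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "map-only word count");
        job.setJarByClass(MapOnlyDriver.class);
        job.setMapperClass(WordCountMapper.class); // mapper from the question
        job.setNumReduceTasks(0);                  // map-only: mapper output goes straight to HDFS
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With zero reduce tasks there is no shuffle or sort; each mapper writes its output directly as a part-m-NNNNN file in the output directory.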
You can achieve something similar with the IdentityReducer.
Performs no reduction, writing all input values directly to the output.
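If you go that route instead of a map-only job, here is a sketch with the old mapred API, where IdentityReducer lives; it assumes old-API versions of the question's WordCountMapper and WordCountReducer, and the driver class name is illustrative:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class IdentityReduceDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(IdentityReduceDriver.class);
        conf.setJobName("identity-reduce word count");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(WordCountMapper.class);     // assumed old-API mapper emitting (Text, IntWritable)
        conf.setCombinerClass(WordCountReducer.class);  // combiner from the question
        // IdentityReducer passes every (key, value) pair through unchanged, so
        // the job keeps a reduce phase (and the combiner) without reducing anything.
        conf.setReducerClass(IdentityReducer.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}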
I'm not sure whether you can keep the combiner, but I would start with the lines above.
Source: https://stackoverflow.com/questions/22173788/combiner-without-reducer-in-hadoop