Question
I just want to get a better understanding of using multiple mappers and reducers. I want to try this out with a simple Hadoop MapReduce word-count job, running two mappers and two reducers. Is there anything I need to configure manually in the configuration files, or is it enough to just make changes to the WordCount.java file?
I'm running this job on a single node, and I run it as
$ hadoop jar job.jar input output
And I've started:
$ hadoop namenode -format
$ hadoop namenode
$ hadoop datanode
sbin$ ./yarn-daemon.sh start resourcemanager
I'm running hadoop-2.0.0-cdh4.0.0
And my WordCount.java file is
package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  private static final Log LOG = LogFactory.getLog(WordCount.class);

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      //printKeyAndValues(key, values);
      for (IntWritable val : values) {
        sum += val.get();
        LOG.info("val = " + val.get());
      }
      LOG.info("sum = " + sum + " key = " + key);
      result.set(sum);
      context.write(key, result);
      //System.err.println(String.format("[reduce] word: (%s), count: (%d)", key, result.get()));
    }

    // a little method to print debug output
    private void printKeyAndValues(Text key, Iterable<IntWritable> values) {
      StringBuilder sb = new StringBuilder();
      for (IntWritable val : values) {
        sb.append(val.get() + ", ");
      }
      System.err.println(String.format("[reduce] key: (%s), value: (%s)", key, sb.toString()));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
Could anyone help me run two mappers and two reducers for this word-count job?
Answer 1:
Gladnick: In case you are planning to use the default TextInputFormat, there will be at least as many mappers as the number of input files (or more, depending on the file size). So just put 2 files into your input directory so that you get 2 mappers running. (I'm advising this solution because you plan to run this as a test case.)
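For example (the local file names here are just placeholders for whatever your input actually is), you could split the input into two files before running the job:
$ hadoop fs -put part1.txt input/
$ hadoop fs -put part2.txt input/
$ hadoop jar job.jar input output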
Now that you have asked for 2 reducers, all you need to do is call job.setNumReduceTasks(2) in your main method before submitting the job.
After that, just package your application into a jar and run it on your Hadoop pseudo-distributed cluster.
In case you need to control which words go to which reducer, you can specify that in a Partitioner class.
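As an illustration only (the vowel/consonant routing here is my guess at what a class like the VowelConsonantPartitioner referenced below might do), a custom partitioner could look like this:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class VowelConsonantPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numReduceTasks) {
    if (numReduceTasks == 0 || key.toString().isEmpty()) {
      return 0; // nothing to choose between
    }
    char first = Character.toLowerCase(key.toString().charAt(0));
    // send words starting with a vowel to reducer 0, everything else to reducer 1
    int partition = ("aeiou".indexOf(first) >= 0) ? 0 : 1;
    return partition % numReduceTasks;
  }
}

The returned value just has to be in the range [0, numReduceTasks), so the modulo keeps it valid even if the job is run with a single reducer.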
Configuration configuration = new Configuration();
// create a configuration object that provides access to various
// configuration parameters
Job job = new Job(configuration, "Wordcount-Vowels & Consonants");
// create the job object and set job name as Wordcount-Vowels &
// Consonants
job.setJarByClass(WordCount.class);
// set the main class
job.setNumReduceTasks(2);
// set the number of reduce tasks required
job.setMapperClass(WordCountMapper.class);
// set the map class for the job
job.setCombinerClass(WordCountCombiner.class);
// set the combiner class for the job
job.setPartitionerClass(VowelConsonantPartitioner.class);
// set the partitioner class for the job
job.setReducerClass(WordCountReducer.class);
// set the reduce class for the job
job.setOutputKeyClass(Text.class);
// set the output type of key (the word) expected from the job, Text
// analogous to String
job.setOutputValueClass(IntWritable.class);
// set the output type of value (the count) expected from the job,
// IntWritable analogous to int
FileInputFormat.addInputPath(job, new Path(args[0]));
// set the input directory for fetching the input files
FileOutputFormat.setOutputPath(job, new Path(args[1]));
This should be the structure of your main program. You may include the combiner and the partitioner if needed.
Answer 2:
For mappers, set
mapred.max.split.size
to half the size of your file.
For reducers, set the number of tasks to 2 explicitly:
mapred.reduce.tasks=2
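If you prefer to set these in the driver rather than in the config files, one possible sketch (the 64 MB split size is just an example assuming a roughly 128 MB input file; these old property names are deprecated on MR2 but still recognized as aliases):

Configuration conf = new Configuration();
// cap the split size at half of an assumed 128 MB input file, so it splits into two map tasks
conf.setLong("mapred.max.split.size", 64L * 1024 * 1024);
// request exactly two reduce tasks
conf.setInt("mapred.reduce.tasks", 2);
Job job = new Job(conf, "word count");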
Source: https://stackoverflow.com/questions/11717495/running-two-mapper-and-two-reducer-for-simple-hadoop-mapreduce-jobs