MapReduce Output ArrayWritable


Question


I'm trying to get output from an ArrayWritable in a simple MapReduce task. I found a few questions with a similar problem, but I can't solve it in my own code. So I'm looking forward to your help. Thanks :)!

Input: a text file with some sentences.

Output should be:

<Word, <length, number of occurrences of the word in the text file>>
 Example: Hello  5  2 

The output that I get from my job is:

hello WordLength_V01$IntArrayWritable@221cf05
test WordLength_V01$IntArrayWritable@799e525a

I think the problem is in the IntArrayWritable subclass, but I can't figure out the right fix. By the way, we are on Hadoop 2.5, and I use the following code to get this result:

Main Method:

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word length V1");

    // Set Classes
    job.setJarByClass(WordLength_V01.class);
    job.setMapperClass(MyMapper.class);
    // job.setCombinerClass(MyReducer.class);
    job.setReducerClass(MyReducer.class);

    // Set Output and Input Parameters
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntArrayWritable.class);

    // Number of Reducers
    job.setNumReduceTasks(1);

    // Set FileDestination
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
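
A note on the commented-out combiner line: with this API a combiner must consume and produce the map output types (here Text and IntWritable), so MyReducer, which emits IntArrayWritable, cannot simply be plugged in there. If a combiner is wanted, a minimal sketch could look like the following (the class name MyCombiner is hypothetical and not part of the original code; it assumes the same imports as the rest of WordLength_V01):

public static class MyCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {

    // Hypothetical combiner sketch: a combiner's output types must match the
    // map output types (Text, IntWritable), so it only pre-sums the counts.
    private final IntWritable partialSum = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        partialSum.set(sum);
        context.write(key, partialSum);
    }
}

It would then be registered with job.setCombinerClass(MyCombiner.class); instead of the commented-out line above.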

Mapper:

public static class MyMapper extends Mapper<Object, Text, Text, IntWritable> {

    // Initialize Variables
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // Map Method
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {

        // Use Tokenizer
        StringTokenizer itr = new StringTokenizer(value.toString());

        // Select each word
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());

            // Output Pair
            context.write(word, one);
        }
    }
}

Reducer:

public static class MyReducer extends Reducer<Text, IntWritable, Text, IntArrayWritable> {

    // Initialize Variables
    private IntWritable count = new IntWritable();
    private IntWritable length = new IntWritable();

    // Reduce Method
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

        // Count Words
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }

        count.set(sum);

        // Wordlength
        length.set(key.getLength());

        // Define Output
        IntWritable[] temp = new IntWritable[2];
        IntArrayWritable output = new IntArrayWritable(temp);

        temp[0] = count;
        temp[1] = length;

        // Output
        output.set(temp);
        context.write(key, new IntArrayWritable(output.get()));
    }
}

Subclass:

public static class IntArrayWritable extends ArrayWritable {
    public IntArrayWritable(IntWritable[] intWritables) {
        super(IntWritable.class);
    }

    @Override
    public IntWritable[] get() {
        return (IntWritable[]) super.get();
    }

    @Override
    public void write(DataOutput arg0) throws IOException {
        for(IntWritable data : get()){
            data.write(arg0);
        }
    }
}   

I used the following links to find a solution:

  • Interface Writable (hadoop.apache.org)
  • Class ArrayWritable (hadoop.apache.org)
  • stackoverflow.com (1)
  • stackoverflow.com (2)

I'm really thankful for any idea!

-------- Solution --------

New Subclass:

public static class IntArrayWritable extends ArrayWritable {

    public IntArrayWritable(IntWritable[] values) {
        super(IntWritable.class, values);
    }

    @Override
    public IntWritable[] get() {
        return (IntWritable[]) super.get();
    }

    @Override
    public String toString() {
        IntWritable[] values = get();
        return values[0].toString() + ", " + values[1].toString();
    }
}
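
For reference, the toString() override matters because the default TextOutputFormat writes each record as key.toString(), a tab, and value.toString(), so once toString() returns the two values, readable numbers replace the IntArrayWritable@... address shown above. One additional, hedged remark: if IntArrayWritable were ever used as a map output (intermediate) value rather than only as the final reducer output, Hadoop would create instances by reflection during deserialization, and the class would then also need a no-argument constructor, for example:

    // Hypothetical extra constructor, only required if IntArrayWritable is
    // used as an intermediate (map output) value: Hadoop instantiates
    // Writables via reflection when deserializing, which needs a no-arg
    // constructor.
    public IntArrayWritable() {
        super(IntWritable.class);
    }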

New Reduce Method:

public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

    // Count Words
    int sum = 0;
    for (IntWritable val : values) {
        sum += val.get();
    }

    count.set(sum);

    // Wordlength
    length.set(key.getLength());

    // Define Output
    IntWritable[] temp = new IntWritable[2];
    temp[0] = count;
    temp[1] = length;

    context.write(key, new IntArrayWritable(temp));
}
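
With these changes, and using the question's example (hello has length 5 and occurs twice), a line in the job output should now read:

hello  2, 5

Note that with the ordering of temp[0] and temp[1] above, the count comes before the length; swap the two assignments if the length-first format from the question is wanted.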

Answer 1:


Everything looks fine. You just need to add one more method, printStrings(), to your subclass that returns a String instead of an array. The built-in toString() is not overridden to print the array values, which is why your output shows an object address instead of the values.

public String printStrings() {
    // Build a space-separated string from the wrapped IntWritable values
    String strings = "";
    IntWritable[] values = get();
    for (int i = 0; i < values.length; i++) {
        strings = strings + " " + values[i].toString();
    }
    return strings;
}
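
A hedged follow-up on using printStrings(): TextOutputFormat only ever calls toString() on the output value, so printStrings() is not picked up automatically. Either call it explicitly in the reducer and emit a Text value, or simply have toString() delegate to it, e.g.:

    // Hypothetical delegation so TextOutputFormat (which calls toString())
    // actually uses printStrings() when writing the output file.
    @Override
    public String toString() {
        return printStrings();
    }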


Source: https://stackoverflow.com/questions/28914596/mapreduce-output-arraywritable
