Control is not going to the reducer in Hadoop


Question:


I have written a custom InputFormat and a custom data type in Hadoop that read images and store them as RGB arrays. But when I use them in my map and reduce functions, control never reaches the reduce function.
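(For context: a custom Hadoop value type has to implement the Writable interface. ImageM's source is not shown in the question, so this is only a sketch of what such a type might look like, assuming the public Height, Width and colour-channel arrays that the code below uses.)

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class ImageM implements Writable {
    public int Height;
    public int Width;
    public int[][] Red, Green, Blue;

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize the dimensions first so readFields() can size the arrays.
        out.writeInt(Height);
        out.writeInt(Width);
        for (int i = 0; i < Height; i++) {
            for (int j = 0; j < Width; j++) {
                out.writeInt(Red[i][j]);
                out.writeInt(Green[i][j]);
                out.writeInt(Blue[i][j]);
            }
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        Height = in.readInt();
        Width = in.readInt();
        Red = new int[Height][Width];
        Green = new int[Height][Width];
        Blue = new int[Height][Width];
        for (int i = 0; i < Height; i++) {
            for (int j = 0; j < Width; j++) {
                Red[i][j] = in.readInt();
                Green[i][j] = in.readInt();
                Blue[i][j] = in.readInt();
            }
        }
    }
}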

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class Image {

    public static class Map extends Mapper<Text, ImageM, Text, ImageM> {

        public void map(Text key, ImageM value, Context context)
                throws IOException, InterruptedException {
            /*
            for (int i = 0; i < value.Height; i++) {
                System.out.println();
                for (int j = 0; j < value.Width; j++) {
                    System.out.print(" " + value.Blue[i][j]);
                }
            }
            */
            context.write(key, value);
        }
    }

    public static class Reduce extends Reducer<Text, ImageM, Text, IntWritable> {

        public void reduce(Text key, ImageM value, Context context)
                throws IOException, InterruptedException {
            for (int i = 0; i < value.Height; i++) {
                System.out.println();
                for (int j = 0; j < value.Width; j++) {
                    System.out.print(value.Blue[i][j] + " ");
                }
            }
            IntWritable m = new IntWritable(10);
            context.write(key, m);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(ImageM.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(ImageFileInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        long start = new Date().getTime();    
        job.waitForCompletion(true);
        long end = new Date().getTime();
        System.out.println("Job took "+(end-start) + " milliseconds");
    }

}

Here the key passed to the map function is the file name, as defined by the custom input format.
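That behaviour comes from the record reader behind the input format. The question does not show ImageFileInputFormat's source, so the following is only a sketch of how such a reader typically emits one (file name, image) record per file; decodeImage is a hypothetical helper:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class ImageRecordReader extends RecordReader<Text, ImageM> {
    private FileSplit split;
    private Text key;
    private ImageM value;
    private boolean processed = false;

    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext context) {
        split = (FileSplit) inputSplit;
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (processed) {
            return false;                            // one record per image file
        }
        key = new Text(split.getPath().getName());   // the file name, e.g. "icon2.gif"
        value = decodeImage(split);                  // hypothetical: build the ImageM
        processed = true;
        return true;
    }

    @Override
    public Text getCurrentKey() { return key; }

    @Override
    public ImageM getCurrentValue() { return value; }

    @Override
    public float getProgress() { return processed ? 1.0f : 0.0f; }

    @Override
    public void close() { }

    // Hypothetical helper: read the file bytes from the split and fill an
    // ImageM with its Height, Width and RGB arrays.
    private ImageM decodeImage(FileSplit s) throws IOException {
        throw new UnsupportedOperationException("depends on ImageM's API");
    }
}

The matching FileInputFormat subclass would return this reader from createRecordReader() and typically override isSplitable() to return false, so that each image file is read whole.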

The output I get is "icon2.gif ImageM@31093d14".

Everything works fine when my data type is used only in the mapper. Can you guess where the problem is?


Answer 1:


Your reduce function signature is wrong. It should be:

@Override
public void reduce(Text key, Iterable<ImageM> values, Context context) 
     throws IOException, InterruptedException

Because your method takes a single ImageM instead of an Iterable<ImageM>, it does not actually override Reducer.reduce(), so Hadoop runs the default identity implementation, which writes every (key, value) pair through unchanged. That is why you see "icon2.gif ImageM@31093d14": the identity reducer emits the ImageM value, which TextOutputFormat prints using its default toString(). Please use the @Override annotation to let the compiler spot this error for you.
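For completeness, a minimal sketch of the corrected reducer, assuming ImageM exposes the public Height, Width and Blue fields used in the question:

public static class Reduce extends Reducer<Text, ImageM, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<ImageM> values, Context context)
            throws IOException, InterruptedException {
        // Hadoop groups all values that share a key into one Iterable.
        for (ImageM value : values) {
            for (int i = 0; i < value.Height; i++) {
                System.out.println();
                for (int j = 0; j < value.Width; j++) {
                    System.out.print(value.Blue[i][j] + " ");
                }
            }
        }
        context.write(key, new IntWritable(10));
    }
}

Note that the map and reduce output value types now differ, so the driver should declare them separately, e.g. job.setMapOutputValueClass(ImageM.class) together with job.setOutputValueClass(IntWritable.class).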



Source: https://stackoverflow.com/questions/24583029/control-is-not-going-to-the-reducer-in-hadoop
