Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable

情话喂你 2020-12-17 09:22

I am trying to run a MapReduce job in Java. Below are my files:

WordCount.java

package counter;


public class          


        
4 answers
  • 2020-12-17 09:29

    Add these two lines to your code:

    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    

    By default the framework assumes the map output types are the same as the job's final output types, and the default input format (TextInputFormat) supplies LongWritable keys and Text values. Since your mapper emits Text as key and IntWritable as value, you need to tell this to the framework explicitly.

    HTH
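
    To show where those two lines fit, here is a sketch of a complete driver. The class names `WordCountMapper` and `WordCountReducer` are assumed, since the original post's code was truncated:

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(WordCountMapper.class);    // assumed custom mapper
            job.setReducerClass(WordCountReducer.class);  // assumed custom reducer

            // Declare the intermediate (map-side) types explicitly; without
            // these the framework assumes they match the final output types.
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
    ```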

  • 2020-12-17 09:38

    Removing this call from my code solved the issue:

    super.map(key, value, context);
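
    That call is the likely culprit because the base `Mapper.map` is an identity map: it writes the incoming pair straight to the context, so the input `LongWritable` key reaches the output. A sketch of a mapper that avoids it (the class name and word-count logic are assumed, since the original code was truncated):

    ```java
    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Do NOT call super.map(key, value, context): the base
            // implementation writes the incoming (LongWritable, Text) pair
            // to the context unchanged, which is exactly the
            // "expected Text, received LongWritable" mismatch.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }
    ```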
    
  • 2020-12-17 09:44

    This may not be your issue, but I had this silly problem once. Make sure you are not mixing the old and new libraries, i.e. mapred vs. mapreduce. Annotate your map and reduce methods with @Override: if you see compile errors, you are not properly overriding the methods.
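
    For example, a new-API (org.apache.hadoop.mapreduce) reducer skeleton looks like the following (the class name is assumed). With @Override in place, accidentally writing the old-API signature (`reduce(Text, Iterator<IntWritable>, OutputCollector, Reporter)` from org.apache.hadoop.mapred) fails to compile instead of silently leaving the framework's identity implementation in effect:

    ```java
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override  // compile error here if the signature doesn't match the new API
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
    ```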

  • 2020-12-17 09:51

    I got a similar exception stack trace due to an improper Mapper class set in my code (a typo :) )

    job.setMapperClass(Mapper.class);  // accidentally set to org.apache.hadoop.mapreduce.Mapper due to the typo
    

    Notice that I was mistakenly using the base Mapper class from the mapreduce package; I changed it to my custom mapper class:

    job.setMapperClass(LogProcMapperClass.class);  // LogProcMapperClass is my custom mapper
    

    The exception was resolved after I corrected the mapper class.
