TaskID.<init>(Lorg/apache/hadoop/mapreduce/JobID;Lorg/apache/hadoop/mapreduce/TaskType;I)V

Submitted on 2019-12-10 11:06:46

Question


import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf

// hbaseConf, tablename and sc are defined elsewhere in the (omitted) surrounding code.
// Configure an old-style (mapred) Hadoop job that writes into an HBase table.
val jobConf = new JobConf(hbaseConf)
jobConf.setOutputFormat(classOf[TableOutputFormat])
jobConf.set(TableOutputFormat.OUTPUT_TABLE, tablename)

val indataRDD = sc.makeRDD(Array("1,jack,15", "2,Lily,16", "3,mike,16"))

// Parse each CSV line into a Put keyed by the numeric id.
val rdd = indataRDD.map(_.split(',')).map { arr =>
  val put = new Put(Bytes.toBytes(arr(0).toInt))
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes(arr(1)))
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("age"), Bytes.toBytes(arr(2).toInt))
  (new ImmutableBytesWritable, put)
}
rdd.saveAsHadoopDataset(jobConf)

When I run Hadoop or Spark jobs, I often hit this error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.TaskID.<init>(Lorg/apache/hadoop/mapreduce/JobID;Lorg/apache/hadoop/mapreduce/TaskType;I)V
at org.apache.spark.SparkHadoopWriter.setIDs(SparkHadoopWriter.scala:158)
at org.apache.spark.SparkHadoopWriter.preSetup(SparkHadoopWriter.scala:60)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1188)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1161)
at com.iteblog.App$.main(App.scala:62)
at com.iteblog.App.main(App.scala)

At first I thought it was a jar conflict, but I checked carefully and there are no other conflicting jars. The Spark and Hadoop versions are:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.0.1</version>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>2.6.0-mr1-cdh5.5.0</version>
</dependency>

I found that TaskID and TaskType are both in the hadoop-core jar, but not in the same package. Why can mapred.TaskID refer to mapreduce.TaskType?
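For context, the failing call happens inside Spark's SparkHadoopWriter, which is compiled against the Hadoop 2.x API; there org.apache.hadoop.mapred.TaskID extends org.apache.hadoop.mapreduce.TaskID, which is why the old-API class can take mapreduce types. Below is a minimal sketch of the call that fails (my own illustration, not Spark's source; the class and constructor signature are taken from the stack trace above):

import org.apache.hadoop.mapred.TaskID
import org.apache.hadoop.mapreduce.{JobID, TaskType}

// Spark 2.0.1 builds task IDs through the Hadoop 2.x constructor
// TaskID(org.apache.hadoop.mapreduce.JobID, TaskType, int):
val jobId  = new JobID("local", 1)
val taskId = new TaskID(jobId, TaskType.MAP, 0)
// The MR1 jar (hadoop-core 2.6.0-mr1-cdh5.5.0) does not ship this constructor,
// so at runtime the call fails with the NoSuchMethodError shown above.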


Answer 1:


I have also faced this issue. It is basically caused by a jar conflict.

Add the jar from the Maven dependency spark-core_2.10:

 <dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>2.0.2</version>
 </dependency>

After changing the jar file, the error goes away.
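One way to double-check that the intended jar is the one actually being used (my own sketch, not part of the original answer; run it in spark-shell or on the driver):

// Print which jar the JVM loads TaskID from; after the fix it should no longer
// be the hadoop-core *-mr1-* jar.
val src = classOf[org.apache.hadoop.mapred.TaskID]
  .getProtectionDomain.getCodeSource.getLocation
println(src)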




Answer 2:


I have resolved this problem by adding the Maven dependency:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.6.0-cdh5.5.0</version>
</dependency>

The error disappears!
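As an extra check (again my own sketch, not from the answer), you can verify by reflection that the Hadoop 2.x constructor Spark needs is now resolvable; if the old MR1 jar still shadows it, this throws NoSuchMethodException:

import org.apache.hadoop.mapred.TaskID
import org.apache.hadoop.mapreduce.{JobID, TaskType}

val ctor = classOf[TaskID].getConstructor(
  classOf[JobID], classOf[TaskType], classOf[Int])
println(ctor)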



Source: https://stackoverflow.com/questions/41055923/taskid-initlorg-apache-hadoop-mapreduce-jobidlorg-apache-hadoop-mapreduce-ta
