Getting NullPointerException when running saveAsNewAPIHadoopDataset in Scala Spark 2 to HBase

Submitted by 两盒软妹~ on 2020-07-30 08:01:26

Question


I am saving an RDD of Puts to HBase using saveAsNewAPIHadoopDataset. Below is my job creation and submission:

    val outputTableName = "test3"
    val conf2 = HBaseConfiguration.create()
    conf2.set("hbase.zookeeper.quorum", "xx.xx.xx.xx")
    conf2.set("hbase.mapred.outputtable", outputTableName)
    conf2.set("mapreduce.outputformat.class", "org.apache.hadoop.hbase.mapreduce.TableOutputFormat")

    val job = createJob(outputTableName, conf2)
    val outputTable = sc.broadcast(outputTableName)
    val hbasePuts = simpleRdd.map(k => convertToPut(k, outputTable))

    hbasePuts.saveAsNewAPIHadoopDataset(job.getConfiguration)

This is my job creation function:

def createJob(table: String, conf: Configuration): Job = {
    conf.set(TableOutputFormat.OUTPUT_TABLE, table)
    val job = Job.getInstance(conf, this.getClass.getName.split('$')(0))
    job.setOutputFormatClass(classOf[TableOutputFormat[String]])
    job
  }

This function converts the data into HBase format:

def convertToPut(k: (String, String, String), outputTable: Broadcast[String]): (ImmutableBytesWritable, Put) = {
    val rowkey = k._1
    val put = new Put(Bytes.toBytes(rowkey))
    val one = Bytes.toBytes("cf1")
    val two = Bytes.toBytes("cf2")

    put.addColumn(one, Bytes.toBytes("a"), Bytes.toBytes(k._2))
    put.addColumn(two, Bytes.toBytes("a"), Bytes.toBytes(k._3))
    (new ImmutableBytesWritable(Bytes.toBytes(outputTable.value)), put)
  }
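
For completeness, the snippets above assume roughly these imports (from the Spark, Hadoop MapReduce, and HBase client APIs):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Put
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.hadoop.mapreduce.Job
    import org.apache.spark.broadcast.Broadcast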

This is the error I am getting at line 125, which is: hbasePuts.saveAsNewAPIHadoopDataset(job.getConfiguration)

Exception in thread "main" java.lang.NullPointerException
    at org.apache.hadoop.hbase.security.UserProvider.instantiate(UserProvider.java:122)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:214)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.checkOutputSpecs(TableOutputFormat.java:177)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1099)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1085)
    at ScalaSpark$.main(ScalaSpark.scala:125)
    at ScalaSpark.main(ScalaSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Answer 1:


I have encountered the same problem, and I think it is caused by a bug in the org.apache.hadoop.hbase.mapreduce.TableOutputFormat class: Spark calls checkOutputSpecs on a newly created TableOutputFormat instance without calling setConf first, so getConf() returns null inside that method and the connection factory throws the NullPointerException.

The original TableOutputFormat code is below:

public void checkOutputSpecs(JobContext context) throws IOException,
        InterruptedException {

    try (Admin admin = ConnectionFactory.createConnection(getConf()).getAdmin()) {
        TableName tableName = TableName.valueOf(this.conf.get(OUTPUT_TABLE));
        if (!admin.tableExists(tableName)) {
            throw new TableNotFoundException("Can't write, table does not exist:" +
                    tableName.getNameAsString());
        }

        if (!admin.isTableEnabled(tableName)) {
            throw new TableNotEnabledException("Can't write, table is not enabled: " +
                    tableName.getNameAsString());
        }
    }
}

If I fix it as below:

public void checkOutputSpecs(JobContext context) throws IOException,
        InterruptedException {

    //set conf by context parameter
    setConf(context.getConfiguration());

    try (Admin admin = ConnectionFactory.createConnection(getConf()).getAdmin()) {
        TableName tableName = TableName.valueOf(this.conf.get(OUTPUT_TABLE));
        if (!admin.tableExists(tableName)) {
            throw new TableNotFoundException("Can't write, table does not exist:" +
                    tableName.getNameAsString());
        }

        if (!admin.isTableEnabled(tableName)) {
            throw new TableNotEnabledException("Can't write, table is not enabled: " +
                    tableName.getNameAsString());
        }
    }
}

My problem is resolved.
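
If patching and rebuilding HBase itself is not practical, the same idea can be applied from the application side by subclassing TableOutputFormat. This is only a rough, untested sketch (the class name FixedTableOutputFormat is just an example), which sets the configuration from the job context before delegating to the original check:

    import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
    import org.apache.hadoop.mapreduce.JobContext

    // Example subclass: make sure a configuration is present before the
    // parent class opens an HBase connection inside checkOutputSpecs.
    class FixedTableOutputFormat[K] extends TableOutputFormat[K] {
      override def checkOutputSpecs(context: JobContext): Unit = {
        if (getConf == null) {
          // setConf builds the HBase configuration this class uses from the
          // job's configuration, which must contain hbase.mapred.outputtable.
          setConf(context.getConfiguration)
        }
        super.checkOutputSpecs(context)
      }
    }

You would then pass classOf[FixedTableOutputFormat[String]] to job.setOutputFormatClass in createJob instead of classOf[TableOutputFormat[String]].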

Another solution is to turn off spark.hadoop.validateOutputSpecs when creating the SparkSession:

val session = SparkSession.builder()
  .config("spark.hadoop.validateOutputSpecs", false)
  .getOrCreate()
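
Note that this only disables the output-spec validation (the table-exists and table-enabled checks), so the NullPointerException goes away but you also lose that early failure. The same setting can alternatively be passed at submit time, for example (the jar name here is a placeholder):

    spark-submit \
      --conf spark.hadoop.validateOutputSpecs=false \
      --class ScalaSpark \
      your-app.jar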


Source: https://stackoverflow.com/questions/50925942/getting-null-pointer-exception-when-running-saveasnewapihadoopdataset-in-scala-s
