Hive Warehouse Connector + Spark = signer information does not match signer information of other classes in the same package

Submitted by 旧时模样 on 2020-01-15 19:13:56

Question


I'm trying to use the Hive Warehouse Connector with Spark on HDP 3.1 and I get an exception even with the simplest example (below). The class causing the problem, JaninoRuntimeException, is present both in org.codehaus.janino:janino:jar:3.0.8 (a dependency of spark-sql) and in com.hortonworks.hive:hive-warehouse-connector_2.11:jar.
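A quick way to check which of the two jars the class actually resolves from, on runs where the app does start, is standard JDK reflection (a minimal diagnostic sketch; on a failing run even this lookup would trigger the same SecurityException):

import java.security.CodeSource

object WhichJar {
  def main(args: Array[String]): Unit = {
    // Prints the jar that JaninoRuntimeException is loaded from on the current
    // classpath. getCodeSource can be null for bootstrap classes, hence Option.
    val cls = Class.forName("org.codehaus.janino.JaninoRuntimeException")
    val location = Option(cls.getProtectionDomain.getCodeSource).map(_.getLocation)
    println(location.getOrElse("loaded from the bootstrap classpath"))
  }
}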

I've tried excluding the janino library from spark-sql, but that resulted in other janino classes going missing. And I need HWC for the new functionality.
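The exclusion I tried looked roughly like this (a sketch; the coordinates are taken from the sbt file below, and this is what leads to the missing-class problem just described):

// Excluding janino from spark-sql removes the signer conflict, but Spark's
// code generation then fails with NoClassDefFoundError for other janino classes.
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.2.3.1.0.0-78" exclude("org.codehaus.janino", "janino")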

Has anyone had the same error? Any ideas how to deal with it?

I'm getting this error:

Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.CommandLineWrapper.main(CommandLineWrapper.java:66)
Caused by: java.lang.SecurityException: class "org.codehaus.janino.JaninoRuntimeException"'s signer information does not match signer information of other classes in the same package
    at java.lang.ClassLoader.checkCerts(ClassLoader.java:898)
    at java.lang.ClassLoader.preDefineClass(ClassLoader.java:668)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:761)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:197)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:36)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:1321)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3277)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2489)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2703)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
    at Main$.main(Main.scala:15)
    at Main.main(Main.scala)
    ... 5 more

My sbt file:

name := "testHwc"

version := "0.1"

scalaVersion := "2.11.11"

resolvers += "Hortonworks repo" at "http://repo.hortonworks.com/content/repositories/releases/"

libraryDependencies += "org.apache.hadoop" % "hadoop-aws" % "3.1.1.3.1.0.0-78"

// https://mvnrepository.com/artifact/com.hortonworks.hive/hive-warehouse-connector
libraryDependencies += "com.hortonworks.hive" %% "hive-warehouse-connector" % "1.0.0.3.1.0.0-78"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.2.3.1.0.0-78"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.2.3.1.0.0-78"

And the source code:

import com.hortonworks.hwc.HiveWarehouseSession
import org.apache.spark.sql.SparkSession

object Main {
  def main(args: Array[String]): Unit = {

    val ss = SparkSession.builder()
      .config("spark.sql.hive.hiveserver2.jdbc.url", "nnn")
      .master("local[*]").getOrCreate()

    import ss.sqlContext.implicits._

    val rdd = ss.sparkContext.makeRDD(Seq(1, 2, 3, 4, 5, 6, 7))

    rdd.toDF("col1").show() // fails here: this is Main.scala:15 in the stack trace
    val hive = HiveWarehouseSession.session(ss).build()
  }
}

Answer 1:


After some investigation I discovered that whether the error occurs depends on the order of the libraries on the classpath.

For some unknown reason, when I ran this project in IntelliJ IDEA the classpath order was random, so the app failed and succeeded randomly.

The bottom line: the HiveWarehouseConnector jar must come after the spark-sql jar on the classpath.
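To verify the ordering at runtime, a minimal sketch like this can be used (the substring filters are just the artifacts involved in this question):

object ClasspathOrder {
  def main(args: Array[String]): Unit = {
    // Prints the classpath entries for the conflicting artifacts in the order
    // the JVM sees them; hive-warehouse-connector should appear after spark-sql.
    System.getProperty("java.class.path")
      .split(java.io.File.pathSeparator)
      .filter(p => p.contains("spark-sql") || p.contains("hive-warehouse-connector") || p.contains("janino"))
      .foreach(println)
  }
}

In IntelliJ IDEA the order can be adjusted manually under Project Structure > Modules > Dependencies, where entries can be moved up and down.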



Source: https://stackoverflow.com/questions/56593226/hive-warehouse-connector-spark-signer-information-does-not-match-signer-info
