Question
I'm running Spark in a standalone cluster where the Spark master, worker and submit each run in their own Docker container.
When I spark-submit my Java app with the --repositories and --packages options, I can see that it successfully downloads the app's required dependencies. However, the stderr log on the Spark worker's web UI reports a java.lang.ClassNotFoundException: kafka.serializer.StringDecoder. This class is available in one of the dependencies downloaded by spark-submit, but it doesn't look like it's available on the worker classpath?
16/02/22 16:17:09 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: java.lang.NoClassDefFoundError: kafka/serializer/StringDecoder
at com.my.spark.app.JavaDirectKafkaWordCount.main(JavaDirectKafkaWordCount.java:71)
... 6 more
Caused by: java.lang.ClassNotFoundException: kafka.serializer.StringDecoder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
The spark-submit call:
${SPARK_HOME}/bin/spark-submit --deploy-mode cluster \
--master spark://spark-master:7077 \
--repositories https://oss.sonatype.org/content/groups/public/ \
--packages org.apache.spark:spark-streaming-kafka_2.10:1.6.0,org.elasticsearch:elasticsearch-spark_2.10:2.2.0 \
--class com.my.spark.app.JavaDirectKafkaWordCount \
/app/spark-app.jar kafka-server:9092 mytopic
Answer 1:
I was working with Spark 2.4.0 when I ran into this problem. I don't have a solution yet, but here are some observations based on experimentation and reading around for solutions. I am noting them down here in case they help someone in their investigation. I will update this answer if I find more information later.
- The --repositories option is required only if some custom repository has to be referenced.
- By default the Maven Central repository is used if the --repositories option is not provided.
- When the --packages option is specified, the submit operation tries to look for the packages and their dependencies in the ~/.ivy2/cache, ~/.ivy2/jars and ~/.m2/repository directories.
- If they are not found there, they are downloaded from Maven Central using Ivy and stored under the ~/.ivy2 directory (see the check sketched after this list).
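A quick way to confirm what Ivy actually resolved is to list that directory after the submit; a minimal check along these lines (the exact file names under ~/.ivy2 are assumptions based on Ivy's usual groupId_artifactId-version naming, not something verified for this app):

# Jars resolved for --packages end up here
ls ~/.ivy2/jars/
# kafka.serializer.StringDecoder lives in the transitive kafka_2.10 jar,
# so that artifact should show up alongside spark-streaming-kafka itself
ls ~/.ivy2/jars/ | grep -i kafka
# Per-dependency resolution metadata is kept in the cache
ls ~/.ivy2/cache/

If the kafka_2.10 jar is present here but the worker still throws the ClassNotFoundException, the download itself is not the problem; the jars are simply not making it onto the driver/worker classpath.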
In my case I had observed that:
- spark-shell worked perfectly with the --packages option.
- spark-submit would fail to do the same. It would download the dependencies correctly but fail to pass the jars on to the driver and worker nodes.
- spark-submit worked with the --packages option if I ran the driver locally using --deploy-mode client instead of cluster (sketched below).
- This would run the driver locally in the command shell where I ran the spark-submit command, but the worker would run on the cluster with the appropriate dependency jars.
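Applied to the command from the question, that client-mode run is the same invocation with only the deploy mode switched; a sketch (everything except --deploy-mode is copied from the original call):

${SPARK_HOME}/bin/spark-submit --deploy-mode client \
--master spark://spark-master:7077 \
--repositories https://oss.sonatype.org/content/groups/public/ \
--packages org.apache.spark:spark-streaming-kafka_2.10:1.6.0,org.elasticsearch:elasticsearch-spark_2.10:2.2.0 \
--class com.my.spark.app.JavaDirectKafkaWordCount \
/app/spark-app.jar kafka-server:9092 mytopic

In client mode the driver runs in the shell where spark-submit was invoked, so the Ivy-resolved jars are already on its classpath, and, as noted above, the executors on the cluster still receive the dependency jars.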
I found the following discussion useful but I still have to nail down this problem. https://github.com/databricks/spark-redshift/issues/244#issuecomment-347082455
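A related experiment, short of a real fix, is to stop relying on --packages to propagate the dependencies and instead hand spark-submit the already-downloaded jars explicitly with --jars. This is only a hypothetical sketch: the glob over ~/.ivy2/jars is an assumption about where the resolved jars sit, and in cluster deploy mode those paths would also have to be reachable from the worker container that launches the driver.

${SPARK_HOME}/bin/spark-submit --deploy-mode cluster \
--master spark://spark-master:7077 \
--jars "$(echo ~/.ivy2/jars/*.jar | tr ' ' ',')" \
--class com.my.spark.app.JavaDirectKafkaWordCount \
/app/spark-app.jar kafka-server:9092 mytopic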
Most people just use an uber jar to avoid running into this problem, and even to avoid the problem of conflicting jar versions where a different version of the same dependency jar is provided by the platform.
But I don't like that idea beyond a stop-gap arrangement and am still looking for a solution.
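For completeness, with an uber jar all of the dependency flags drop out of the submit, since everything the app needs is bundled into the one artifact; roughly (a sketch, assuming the assembled jar is built as /app/spark-app-uber.jar, a hypothetical name):

${SPARK_HOME}/bin/spark-submit --deploy-mode cluster \
--master spark://spark-master:7077 \
--class com.my.spark.app.JavaDirectKafkaWordCount \
/app/spark-app-uber.jar kafka-server:9092 mytopic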
Source: https://stackoverflow.com/questions/35559010/spark-submit-classpath-issue-with-repositories-packages-options