spark-submit on kubernetes cluster

Submitted by 五迷三道 on 2020-03-21 07:01:11

Question


I have created a simple word count program JAR file, which is tested and works fine locally. However, when I try to run the same JAR file on my Kubernetes cluster, it throws an error. Below is my spark-submit command along with the error thrown.

spark-submit --master k8s://https://192.168.99.101:8443 --deploy-mode cluster --name WordCount --class com.sample.WordCount --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=debuggerrr/spark-new:spark-new local:///C:/Users/siddh/OneDrive/Desktop/WordCountSample/target/WordCountSample-0.0.1-SNAPSHOT.jar local:///C:/Users/siddh/OneDrive/Desktop/initialData.txt  

The last local:// argument is the data file that the word count program reads to produce its results.

Below is my error:

    status: [ContainerStatus(containerID=null, image=gcr.io/spark-operator/spark:v2.4.5, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=Back-off pulling image "gcr.io/spark-operator/spark:v2.4.5", reason=ImagePullBackOff, additionalProperties={}), additionalProperties={}), additionalProperties={started=false})]
20/02/11 22:48:13 INFO LoggingPodStatusWatcherImpl: State changed, new state:
         pod name: wordcount-1581441237366-driver
         namespace: default
         labels: spark-app-selector -> spark-386c19d289a54e2da1733376821985b1, spark-role -> driver
         pod uid: a9e74d13-cf77-4de0-a16d-a71a21118ef8
         creation time: 2020-02-11T17:13:59Z
         service account name: default
         volumes: spark-local-dir-1, spark-conf-volume, default-token-wbvkb
         node name: minikube
         start time: 2020-02-11T17:13:59Z
         container images: gcr.io/spark-operator/spark:v2.4.5
         phase: Running
         status: [ContainerStatus(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, image=gcr.io/spark-operator/spark:v2.4.5, imageID=docker-pullable://gcr.io/spark-operator/spark@sha256:0d2c7d9d66fb83a0311442f0d2830280dcaba601244d1d8c1704d72f5806cc4c, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=true, restartCount=0, state=ContainerState(running=ContainerStateRunning(startedAt=2020-02-11T17:18:11Z, additionalProperties={}), terminated=null, waiting=null, additionalProperties={}), additionalProperties={started=true})]
20/02/11 22:48:19 INFO LoggingPodStatusWatcherImpl: State changed, new state:
         pod name: wordcount-1581441237366-driver
         namespace: default
         labels: spark-app-selector -> spark-386c19d289a54e2da1733376821985b1, spark-role -> driver
         pod uid: a9e74d13-cf77-4de0-a16d-a71a21118ef8
         creation time: 2020-02-11T17:13:59Z
         service account name: default
         volumes: spark-local-dir-1, spark-conf-volume, default-token-wbvkb
         node name: minikube
         start time: 2020-02-11T17:13:59Z
         container images: gcr.io/spark-operator/spark:v2.4.5
         phase: Failed
         status: [ContainerStatus(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, image=gcr.io/spark-operator/spark:v2.4.5, imageID=docker-pullable://gcr.io/spark-operator/spark@sha256:0d2c7d9d66fb83a0311442f0d2830280dcaba601244d1d8c1704d72f5806cc4c, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, exitCode=1, finishedAt=2020-02-11T17:18:18Z, message=null, reason=Error, signal=null, startedAt=2020-02-11T17:18:11Z, additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={started=false})]
20/02/11 22:48:21 INFO LoggingPodStatusWatcherImpl: Container final statuses:


         Container name: spark-kubernetes-driver
         Container image: gcr.io/spark-operator/spark:v2.4.5
         Container state: Terminated
         Exit code: 1
20/02/11 22:48:21 INFO Client: Application WordCount finished.
20/02/11 22:48:23 INFO ShutdownHookManager: Shutdown hook called
20/02/11 22:48:23 INFO ShutdownHookManager: Deleting directory C:\Users\siddh\AppData\Local\Temp\spark-1a3ee936-d430-4f9d-976c-3305617678df

How do I resolve this error? How can I pass the local file?
NOTE: The JAR file and the data file are present on my desktop, not in the Docker image.


Answer 1:


Unfortunately, passing local files to the job is not yet supported in the official release of Spark on Kubernetes. There is a solution in a Spark fork that requires adding a Resource Staging Server deployment to the cluster, but it is not included in the released builds.

Why is it not so easy to support? Consider the network communication that would be required between your machine and the Spark pods in Kubernetes: for the pods to pull your local jars, they would have to be able to reach your machine (you would probably need to run a web server locally and expose its endpoints, as sketched below), and vice versa: to push the jar from your machine to a Spark pod, your spark-submit script would need to reach the pod (which can be done via a Kubernetes Ingress and requires several more components to be integrated).
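For illustration only, here is a minimal sketch of the "pull" direction, assuming you serve the build output from your machine with Python's built-in static file server (any web server would do) and that the pods can route back to your host:

# A minimal sketch, not a Spark feature: expose the jar over HTTP from the
# build directory on your machine.
cd C:/Users/siddh/OneDrive/Desktop/WordCountSample/target
python -m http.server 8000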

The solution Spark does support is to store your artifacts (jars) in an HTTP-accessible location, including HDFS-compatible storage systems. Please refer to the official docs.
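As a sketch of what that looks like, here is the submit command with the application jar fetched from a remote URI; the http:// host and the hdfs:// input path are hypothetical placeholders, and the input should live on an HDFS-compatible store, since sc.textFile cannot fetch plain HTTP URLs:

# Sketch only: 192.168.99.1:8000 and hdfs:///data/initialData.txt are placeholders.
spark-submit --master k8s://https://192.168.99.101:8443 --deploy-mode cluster --name WordCount --class com.sample.WordCount --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=debuggerrr/spark-new:spark-new http://192.168.99.1:8000/WordCountSample-0.0.1-SNAPSHOT.jar hdfs:///data/initialData.txt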

Hope it helps.




Answer 2:


Download the precompiled Spark package spark-2.4.4-bin-hadoop2.7.tgz, extract it, and put your jar inside the examples folder so it gets baked into the image (see the copy commands after the listing):

 tree -L 1
.
├── LICENSE
├── NOTICE
├── R
├── README.md
├── RELEASE
├── bin
├── conf
├── data
├── examples  <---
├── jars
├── kubernetes
├── licenses
├── monitoring
├── python
├── sbin
└── yarn
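For example (the source paths are taken from the question; copying the input file as well means the data is baked into the image too):

# Copy the application jar and the input file into the extracted package
# so the Docker build below includes them under /opt/spark/examples.
cp C:/Users/siddh/OneDrive/Desktop/WordCountSample/target/WordCountSample-0.0.1-SNAPSHOT.jar examples/
cp C:/Users/siddh/OneDrive/Desktop/initialData.txt examples/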

Then build a Docker image from the stock Dockerfile and push it to a registry your cluster can pull from (the tag must match what you pass to spark.kubernetes.container.image):

docker build -t debuggerrr/spark-docker:v0.1 -f ./kubernetes/dockerfiles/spark/Dockerfile .
docker push debuggerrr/spark-docker:v0.1
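If you are on minikube, one alternative worth noting is to build against the VM's Docker daemon so no registry push is needed (a standard minikube feature, not something Spark requires):

# Alternative for minikube: point your shell's Docker client at the VM's
# daemon, then run the docker build above; the cluster can then use the
# image directly with the default IfNotPresent pull policy.
eval $(minikube docker-env)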

Now run spark-submit. Note that local:// refers to a path inside the container, not on your Windows machine, so point it at the location the Dockerfile copies examples to (/opt/spark/examples in the stock image):

spark-submit --master k8s://https://192.168.99.101:8443 --deploy-mode cluster --name WordCount --class com.sample.WordCount --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=debuggerrr/spark-docker:v0.1 local:///opt/spark/examples/WordCountSample-0.0.1-SNAPSHOT.jar local:///opt/spark/examples/initialData.txt
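To check the run afterwards, you can follow the driver pod's output; the pod name below is the one from the question's log, so substitute your own from kubectl get pods:

# Follow the driver logs (pod names look like wordcount-<timestamp>-driver).
kubectl logs -f wordcount-1581441237366-driver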


Source: https://stackoverflow.com/questions/60174548/spark-submit-on-kubernetes-cluster
