I have a Spark job running on a distributed Spark cluster with 10 nodes. I am doing some text file processing on HDFS. The job runs fine until the last step.
I ran into the same problem before and then realized that, if you run in standalone mode, the driver is run by your user while the executor processes are run by root. The only changes you need are:
First, run sbt package to create the jar file. Note that it is better to run sbt package as your regular user, not as root. When I tried running sbt package as root (with sudo), the assembly jar file ended up somewhere else. A sketch of this step is shown below.
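
For example, building as the regular user might look like the following; the project directory and Scala version here are assumptions, so the exact jar path on your machine may differ:

# run these as your normal user, not with sudo
cd ~/my-spark-project
sbt package
# the jar typically lands under target/scala-<version>/
ls target/scala-2.11/*.jar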
Once you have the assembly jar file, run spark-submit with "sudo":
sudo /opt/spark-2.0/bin/spark-submit \
--class ...
--master ..
...
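
For illustration, a filled-in version of the command above might look like the following; the main class, jar path, master URL, and HDFS paths are hypothetical placeholders, not values from the original job:

# hypothetical example: adjust class, master URL, jar path, and arguments to your setup
sudo /opt/spark-2.0/bin/spark-submit \
  --class com.example.TextProcessor \
  --master spark://master-host:7077 \
  target/scala-2.11/my-spark-project_2.11-1.0.jar \
  hdfs:///input/textfiles hdfs:///output/results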