I am trying to set up a Spark standalone cluster following the official documentation. My master is on a local VM running Ubuntu, and I also have one worker running on another machine.
I ran into the exact same problem and just figured out how to get it to work.
The problem is that your Spark master is listening on the hostname (in your example, spark). That lets the worker on the same host register successfully, but registration fails from another machine when you run start-slave.sh spark://spark:7077.
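As an optional check (not part of the original answer), you can confirm which address the master is actually bound to, for example:

```sh
# On the master node: see which address port 7077 is bound to
ss -ltnp | grep 7077

# The master log also reports the URL it started with, e.g.
# "Starting Spark master at spark://spark:7077"
grep "Starting Spark master" $SPARK_HOME/logs/*Master*.out
```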
The solution is to make sure SPARK_MASTER_IP is set to the master's IP address (not its hostname) in conf/spark-env.sh:

    SPARK_MASTER_IP=YOUR_HOST_IP
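For reference, a minimal sketch of the relevant part of conf/spark-env.sh; the address 192.168.1.10 is just a placeholder, substitute your master's real IP:

```sh
# conf/spark-env.sh on the master node
# Bind the master to its IP address rather than its hostname so that
# workers on other machines can reach it.
export SPARK_MASTER_IP=192.168.1.10   # placeholder; use your master's IP
export SPARK_MASTER_PORT=7077         # default port, shown here for clarity
```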
on your master node, then start the Spark master as usual. You can open the web UI to confirm that the master is shown as spark://YOUR_HOST_IP:7077 after the start. Then, on the other machine, the command start-slave.sh spark://YOUR_HOST_IP:7077 should start the worker and register it with the master successfully.
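Concretely, the start sequence could look like the sketch below; again, 192.168.1.10 stands in for your master's IP:

```sh
# On the master node
$SPARK_HOME/sbin/start-master.sh
# The master web UI (http://192.168.1.10:8080 by default) should now show
# URL: spark://192.168.1.10:7077

# On the worker machine
$SPARK_HOME/sbin/start-slave.sh spark://192.168.1.10:7077
```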
Hope this helps.