I'm trying to run the Spark examples from Eclipse and getting this generic error: `Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources`.
If you run your application from an IDE and your workers do have free resources, you need to do the following:

1) First of all, configure the Spark master and worker nodes.
2) Specify the driver (your PC) configuration so that the workers can return results to it:
```java
import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
    .setAppName("Test spark")
    .setMaster("spark://<master-ip>:<master-port>")
    // Make all communication ports static. This is not necessary if you
    // disabled firewalls or your nodes are on a local network; otherwise
    // you must open these ports in the firewall settings.
    .set("spark.blockManager.port", "10025")
    .set("spark.driver.blockManager.port", "10026")
    .set("spark.driver.port", "10027")
    .set("spark.cores.max", "12")
    .set("spark.executor.memory", "2g")
    .set("spark.driver.host", "<driver-ip>"); // necessary
```
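As a minimal sketch of how the configuration above is used (the IP addresses are placeholders, and Spark's Java API is assumed to be on the classpath), you can create a context and run a trivial job to verify that executors are actually being assigned:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class TestSpark {
    public static void main(String[] args) {
        // Hypothetical master and driver addresses; replace with your own.
        SparkConf conf = new SparkConf()
                .setAppName("Test spark")
                .setMaster("spark://192.168.1.10:7077")
                .set("spark.driver.host", "192.168.1.20");

        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // If resources were granted, this simple sum completes quickly;
            // otherwise the "Initial job has not accepted any resources"
            // warning keeps repeating in the driver logs.
            int sum = sc.parallelize(Arrays.asList(1, 2, 3, 4))
                        .reduce(Integer::sum);
            System.out.println("sum = " + sum);
        }
    }
}
```

This only runs against a live cluster, so it is meant as a connectivity check, not a unit test.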
The error indicates that your cluster has insufficient resources for the current job. Since you have not started the slaves (workers), the cluster has no resources to allocate to your job. Starting the slaves will fix it:

`start-slave.sh spark://<master-ip>:7077`
Try using `spark://127.0.0.1:7077` as the master address instead of the `*.local` name. Sometimes Java is not able to resolve `.local` addresses, for reasons I don't understand.
I had the same problem, and it was because the workers could not communicate with the driver. You need to set `spark.driver.port` (and open that port on your driver machine), `spark.driver.host`, and `spark.driver.bindAddress` in your `spark-submit` invocation from the driver.
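Instead of passing these with `spark-submit --conf`, they can equivalently be set programmatically on the `SparkConf`. A sketch, where the addresses and port numbers are placeholders you must adapt:

```java
import org.apache.spark.SparkConf;

// Placeholders: substitute your driver machine's externally reachable IP
// and any free port that is open in the firewall.
SparkConf conf = new SparkConf()
        .setAppName("driver-connectivity")
        .setMaster("spark://192.168.1.10:7077")
        .set("spark.driver.port", "10027")            // port workers connect back to
        .set("spark.driver.host", "192.168.1.20")     // address advertised to workers
        .set("spark.driver.bindAddress", "0.0.0.0");  // local interface to bind on
```

`spark.driver.host` is what the workers see, while `spark.driver.bindAddress` is the local interface the driver listens on; they differ when the driver sits behind NAT.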