Context from the question: the job was submitted through an API call, and the response states that it is running. On the cluster UI, the worker (slave) worker-20160712083825-172.31.17.189-59433 is visible.
You can take a look at my answer to a similar question, Apache Spark on Mesos: Initial job has not accepted any resources:
While most of the other answers focus on resource allocation (cores, memory) on the Spark slaves, I would like to highlight that a firewall can cause exactly the same issue, especially when you are running Spark on a cloud platform.
If you can see the Spark slaves in the web UI, you have probably opened the standard ports (8080, 8081, 7077, 4040). Nonetheless, when you actually run a job, it uses SPARK_WORKER_PORT, spark.driver.port and spark.blockManager.port, which are randomly assigned by default. If your firewall blocks these ports, the master cannot retrieve any job-specific response from the slaves, and the error is returned.
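As a minimal sketch (the master URL and port numbers below are illustrative, not defaults), you can pin the driver-side ports to fixed values in your SparkConf so that firewall rules can allow them explicitly; SPARK_WORKER_PORT, by contrast, is an environment variable set in conf/spark-env.sh on the workers:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Pin the normally random ports to fixed values so firewall rules
// can reference them. The chosen port numbers are illustrative.
val conf = new SparkConf()
  .setAppName("pinned-ports-example")
  .setMaster("spark://master-host:7077")   // hypothetical master URL
  .set("spark.driver.port", "40000")       // driver RPC port (random by default)
  .set("spark.blockManager.port", "40001") // block manager port (random by default)

val sc = new SparkContext(conf)
```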
You can run a quick test by opening all the ports and checking whether the slaves accept jobs.
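If opening everything is not possible in your environment, a rough reachability check like the one below can tell you whether a specific port is open between two machines; the host and port are placeholders for your worker address and a pinned Spark port:

```scala
import java.net.{InetSocketAddress, Socket}

// Returns true if a TCP connection to host:port succeeds within the timeout.
def portOpen(host: String, port: Int, timeoutMs: Int = 2000): Boolean = {
  val socket = new Socket()
  try {
    socket.connect(new InetSocketAddress(host, port), timeoutMs)
    true
  } catch {
    case _: java.io.IOException => false
  } finally {
    socket.close()
  }
}

// Example: probe the worker IP from the cluster UI on a pinned port.
println(portOpen("172.31.17.189", 40001))
```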