According to the Spark on Mesos docs, one needs to set spark.executor.uri so that it points to a Spark distribution:
val conf = new SparkConf()
  .setMaster("mesos://<mesos-master-host>:5050")
  .set("spark.executor.uri", "<URL of the Spark distribution tarball>")
After discovering the Spark JobServer project, I decided it was the most suitable option for my use case.
It supports dynamic context creation via a REST API, as well as adding JARs to a newly created context manually or programmatically. It is also capable of running low-latency synchronous jobs, which is exactly what I need.
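To make that workflow concrete, here is a rough sketch of driving the JobServer REST API from Scala using only the JDK's HttpURLConnection. The host and port (localhost:8090 is the JobServer default), the app name, the JAR path and the job class are placeholders chosen for illustration, not values from this particular setup.

import java.net.{HttpURLConnection, URL}
import java.nio.file.{Files, Paths}

// Minimal POST helper: sends the (possibly empty) body and returns the response as a string.
def post(url: String, body: Array[Byte] = Array.empty[Byte]): String = {
  val conn = new URL(url).openConnection().asInstanceOf[HttpURLConnection]
  conn.setRequestMethod("POST")
  conn.setDoOutput(true)
  conn.getOutputStream.write(body)
  scala.io.Source.fromInputStream(conn.getInputStream).mkString
}

// 1. Create a named context with some resources.
post("http://localhost:8090/contexts/my-context?num-cpu-cores=2&memory-per-node=512m")

// 2. Upload the application JAR under an app name (the path is just an example).
post("http://localhost:8090/jars/my-app", Files.readAllBytes(Paths.get("target/my-app.jar")))

// 3. Run a job synchronously (sync=true) on that context; the call blocks until the result comes back.
post("http://localhost:8090/jobs?appName=my-app&classPath=com.example.MyJob&context=my-context&sync=true")

The same three calls can just as well be made with curl; the important part is that the context is created once and reused, so each synchronous job avoids the SparkContext start-up cost.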
I created a Dockerfile so you can try it out with the most recent (supported) versions of Spark (1.4.1), Spark JobServer (0.6.0) and built-in Mesos support (0.24.1):
References: