How are jobs assigned to executors in Spark Streaming?
Let's say I've got two or more executors in a Spark Streaming application. I've set the batch interval to 10 seconds, so a job is started every 10 seconds to read input from my HDFS. If every job lasts more than 10 seconds, is the new job assigned to a free executor, even if the previous one hasn't finished? It may seem like an obvious answer, but I haven't found anything about job scheduling on the website or in the paper related to Spark Streaming. If you know of any links where these things are explained, I would really appreciate them. Thank you.
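For reference, this is roughly the setup I'm describing, sketched with the standard Spark Streaming API (the app name and HDFS path are placeholders, and the actual per-batch work is just a stand-in `count`):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Submitted with >= 2 executors; app name is a placeholder
val conf = new SparkConf().setAppName("hdfs-stream")

// 10-second batch interval: one streaming job is generated every 10 s
val ssc = new StreamingContext(conf, Seconds(10))

// Each batch reads new files from this HDFS directory (placeholder path)
val lines = ssc.textFileStream("hdfs:///path/to/input")

// Stand-in for the real per-batch processing that may take > 10 s
lines.foreachRDD { rdd => rdd.count() }

ssc.start()
ssc.awaitTermination()
```

My question is about what the scheduler does when the `foreachRDD` work for one batch is still running when the next batch's job is generated.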