I wrote a class that takes a DataFrame, does some calculations on it, and can export the results. The DataFrames are generated from a list of keys. I know that I am doing this sequentially, one key at a time.
You can use Scala's parallel collections to run the foreach in parallel on the driver side.
val l = List(34, 32, 132, 352).par
l.foreach { i =>
  // your code to be run in parallel for each i
}
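Applied to your setup, a minimal sketch could look like the following; loadDataFrame, calculate and export are hypothetical placeholders for whatever your class actually does:

import scala.collection.parallel.CollectionConverters._ // needed on Scala 2.13+; on 2.12 and earlier .par is built in

val keys = List(34, 32, 132, 352)

keys.par.foreach { key =>
  val df = loadDataFrame(key) // hypothetical: build the DataFrame for this key
  val result = calculate(df)  // hypothetical: your class's calculations
  export(result)              // hypothetical: write the results out
}

Each key is then processed on a thread from the default fork-join pool, so the Spark jobs it triggers are submitted to the cluster concurrently from the driver.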
However, a word of caution: is your cluster capable of running jobs in parallel? You may submit jobs to your Spark cluster concurrently, but they may end up getting queued on the cluster and executed sequentially.
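If you do see the jobs serialise like that, one knob worth checking is the scheduler mode: within a single application Spark queues jobs FIFO by default, and switching to the FAIR scheduler lets concurrently submitted jobs share the executors. A minimal sketch (the app name is a placeholder):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("parallel-exports")            // hypothetical app name
  .config("spark.scheduler.mode", "FAIR") // FIFO is the default; FAIR interleaves concurrent jobs
  .getOrCreate()

Even with FAIR scheduling, true parallelism still depends on the cluster having enough free executor cores for more than one job at a time.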