Apache Spark: setting executor instances does not change the executors
Question: I have an Apache Spark application running in cluster mode on a YARN cluster (Spark has 3 nodes available on this cluster). While the application is running, the Spark UI shows 2 executors (each running on a different node), with the driver running on the third node. I want the application to use more executors, so I tried adding the --num-executors argument to spark-submit and set it to 6:

    spark-submit --driver-memory 3G --num-executors 6 --class main.Application --executor-memory 11G --master
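For context, on YARN the --num-executors flag maps to the spark.executor.instances configuration property, so the same request can also be made programmatically. Below is a minimal sketch, not the asker's actual code; the class name main.Application and the memory settings are taken from the command above, everything else is illustrative:

    import org.apache.spark.{SparkConf, SparkContext}

    object Application {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("Application")
          // Equivalent to passing --num-executors 6 on the command line (YARN only)
          .set("spark.executor.instances", "6")
          // Equivalent to --executor-memory 11G
          .set("spark.executor.memory", "11g")

        val sc = new SparkContext(conf)
        // ... job logic would go here ...
        sc.stop()
      }
    }

Note that whether 6 executors are actually granted still depends on what YARN can allocate: each executor needs a container with enough memory and cores, so cluster capacity and YARN's container limits can cap the effective executor count below the requested number.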