How to control how many executors to run in yarn-client mode?
Question: I have a Hadoop cluster of 5 nodes where Spark runs in yarn-client mode. I use --num-executors to set the number of executors. The maximum number of executors I am able to get is 20; even if I request more, I still get only 20. Is there an upper limit on the number of executors that can be allocated? Is it set by a configuration, or is the decision made based on the available resources?

Answer 1: Apparently your 20 running executors consume all available memory. You can try decreasing the executor memory (--executor-memory) so that YARN can fit more executor containers within the cluster's resources.
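As a rough illustration of why the cap appears (all memory figures below are hypothetical assumptions, not values from the question), YARN can only launch as many executor containers as fit into each node's memory, so lowering per-executor memory raises the ceiling:

```python
# Sketch: estimate the maximum number of executors YARN can allocate,
# given per-node memory and per-executor memory. The numbers used in
# the calls below are assumptions for illustration only.

def max_executors(nodes, node_mem_gb, executor_mem_gb, overhead_frac=0.10):
    """Each executor container needs its heap plus YARN memory overhead
    (by default max(384 MB, 10% of executor memory); simplified here
    to a flat 10% fraction)."""
    container_gb = executor_mem_gb * (1 + overhead_frac)
    per_node = int(node_mem_gb // container_gb)
    return nodes * per_node

# 5 nodes with ~36 GB usable each and 8 GB executors -> 4 per node -> 20 total,
# which would explain a hard cap of 20 regardless of --num-executors.
print(max_executors(5, 36, 8))   # -> 20

# Halving executor memory roughly doubles the cap.
print(max_executors(5, 36, 4))   # -> 40
```

This is only an approximation: real YARN scheduling also considers vcores and `yarn.scheduler.maximum-allocation-mb`, but the principle is the same, so requesting more executors than the cluster's memory can hold simply leaves the extra requests unsatisfied.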