Spark on YARN: Less executor memory than set via spark-submit
Question

I'm using Spark on a YARN cluster (HDP 2.4) with the following setup:

- 1 master node: 64 GB RAM (48 GB usable), 12 cores (8 cores usable)
- 5 slave nodes: 64 GB RAM (48 GB usable) each, 12 cores (8 cores usable) each

YARN settings:

- total container memory per host: 48 GB
- minimum container size = maximum container size = 6 GB
- vcores in cluster = 40 (5 worker nodes × 8 cores)
- minimum vcores/container = maximum vcores/container = 1

When I run my Spark application with the command spark-submit -
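For reference, YARN settings like those described would typically be expressed in `yarn-site.xml` roughly as follows (a sketch only; the values are the GB figures above converted to MB, and the actual cluster configuration may differ):

```xml
<!-- Hypothetical yarn-site.xml fragment matching the settings described above -->
<configuration>
  <!-- 48 GB of container memory available per NodeManager host -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>49152</value>
  </property>
  <!-- minimum container size = maximum container size = 6 GB -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>6144</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>6144</value>
  </property>
  <!-- minimum #vcores/container = maximum #vcores/container = 1 -->
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>1</value>
  </property>
</configuration>
```

With min and max allocation pinned to the same values, every container YARN grants is exactly 6 GB and 1 vcore, which constrains what executor sizes spark-submit can actually obtain.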