Why does vcore always equal the number of nodes in Spark on YARN?

醉话见心  2020-12-28 15:58

I have a Hadoop cluster with 5 nodes, each of which has 12 cores and 32 GB of memory. I use YARN as the MapReduce framework, and I have the following YARN settings:

    …
3 Answers
  •  青春惊慌失措  2020-12-28 16:46

    I was wondering the same thing, but changing the resource calculator fixed it for me. By default the CapacityScheduler uses DefaultResourceCalculator, which only takes memory into account, so YARN reports one vcore per container no matter how many cores you request. Switching to DominantResourceCalculator makes CPU count as well. This is how I set the property:

        
        <property>
            <name>yarn.scheduler.capacity.resource-calculator</name>
            <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
        </property>
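
    Note that this property belongs in capacity-scheduler.xml; after changing it, restart the ResourceManager so the scheduler picks up the new calculator.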

    Check in the YARN UI how many containers and vcores are assigned to the application. With this change, the number of containers should be num-executors + 1 (the extra container is the ApplicationMaster), and the vcores should be (executor-cores * num-executors) + 1.
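
    As a quick sanity check (the numbers below are illustrative, not taken from the question, and my-app.jar is a placeholder for your application), a submission like the following should show up in the YARN UI as 5 containers and 9 vcores: 4 executors * 2 cores = 8, plus 1 for the ApplicationMaster:

        # illustrative spark-submit; adjust the values to your own job
        spark-submit \
            --master yarn \
            --deploy-mode cluster \
            --num-executors 4 \
            --executor-cores 2 \
            --executor-memory 4g \
            my-app.jar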
