Apache Hadoop Yarn - Underutilization of cores

庸人自扰  2020-11-30 23:06

No matter how much I tinker with the settings in yarn-site.xml, i.e. using all of the options below, the cores on the cluster remain underutilized:

yarn.scheduler.minimum-allocation-vcores
yarn.
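For reference, the vcore-related properties usually adjusted in yarn-site.xml look roughly like the sketch below; the property names are standard YARN settings, while the values are purely illustrative.

    <!-- Illustrative yarn-site.xml excerpt: standard YARN vcore settings,
         values are example figures only -->
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>8</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>8</value>
    </property>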


        
2 Answers
  •  無奈伤痛  2020-11-30 23:45

    The problem lies not with yarn-site.xml or spark-defaults.conf but with the resource calculator that assigns cores to the executors (or, in the case of MapReduce jobs, to the mappers/reducers).

    The default resource calculator, org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, uses only memory information when allocating containers; CPU scheduling is not enabled by default. To take both memory and CPU into account, the resource calculator needs to be changed to org.apache.hadoop.yarn.util.resource.DominantResourceCalculator in the capacity-scheduler.xml file.

    Here's what needs to change.

    
    <property>
        <name>yarn.scheduler.capacity.resource-calculator</name>
        <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>
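    After editing capacity-scheduler.xml, the ResourceManager has to pick up the change; depending on the Hadoop version this means restarting the ResourceManager or running yarn rmadmin -refreshQueues. Once the DominantResourceCalculator is in effect, per-container vcore requests (for example Spark's --executor-cores) are reflected in the scheduler's accounting, whereas with the default calculator the YARN UI typically reports just one vcore per container regardless of what was requested.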
