Spark ignores SPARK_WORKER_MEMORY?

梦如初夏 2021-01-16 16:29

I'm using standalone cluster mode, Spark 1.5.2.

Even though I'm setting SPARK_WORKER_MEMORY in spark-env.sh, it looks like this setting is ignored.
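
For reference, this is roughly what the relevant part of conf/spark-env.sh looks like on a worker node; a minimal sketch, with placeholder sizes rather than the actual values:

    # conf/spark-env.sh on each worker node (sizes are placeholders)
    export SPARK_WORKER_MEMORY=24g   # total memory this worker may hand out to executors
    export SPARK_WORKER_CORES=8      # total cores this worker may hand out to executors

Note that SPARK_WORKER_MEMORY only caps what a worker can offer; each executor still requests spark.executor.memory (default 1g), which may be why the web UI shows far less memory in use than the worker cap.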

4 Answers
  •  佛祖请我去吃肉
    2021-01-16 17:09

    This is my configuration for cluster mode, in spark-defaults.conf:

    spark.driver.memory 5g
    spark.executor.memory   6g
    spark.executor.cores    4
    

    Do you have something like this?

    If you don't set these options (with values suited to your cluster), each Spark executor will get 1 GB of RAM by default.
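
    If spark-defaults.conf does not exist yet, you can create it from the template shipped with Spark and append the settings; this sketch assumes the default $SPARK_HOME/conf location:

    # Create spark-defaults.conf from the bundled template (assumes default $SPARK_HOME/conf)
    cp "$SPARK_HOME/conf/spark-defaults.conf.template" "$SPARK_HOME/conf/spark-defaults.conf"
    printf '%s\n' 'spark.driver.memory 5g' 'spark.executor.memory 6g' 'spark.executor.cores 4' \
      >> "$SPARK_HOME/conf/spark-defaults.conf"

    spark-submit reads these defaults when an application is launched, so they apply to the next job you submit.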

    Alternatively, you can pass these options to ./spark-submit, like this (use --deploy-mode client instead for client mode):

    # Run on a YARN cluster
    export HADOOP_CONF_DIR=XXX
    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master yarn \
      --deploy-mode cluster \
      --executor-memory 20G \
      --num-executors 50 \
      /path/to/examples.jar \
      1000
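
    Since the question is about standalone cluster mode rather than YARN, here is a roughly equivalent submission against a standalone master; the master host and the sizes are placeholders:

    # Run on a Spark standalone cluster (host and sizes are placeholders)
    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master spark://master-host:7077 \
      --deploy-mode cluster \
      --executor-memory 6G \
      --total-executor-cores 16 \
      /path/to/examples.jar \
      1000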
    

    When you run an application, check the master web UI at http://<master-ip-or-hostname>:8080 to verify that resources have been allocated correctly.
