How to change memory per node for Apache Spark worker

别跟我提以往 2020-12-02 18:50

I am configuring an Apache Spark cluster.

When I run the cluster with 1 master and 3 slaves, I see this on the master monitor page:

Memory
2.0 GB (51

5 Answers
  • 抹茶落季 2020-12-02 19:18

    According to the Spark documentation, you can change the memory per node with the command-line argument --executor-memory when submitting your application. For example:

    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master spark://master.node:7077 \
      --executor-memory 8G \
      --total-executor-cores 100 \
      /path/to/examples.jar \
      1000
    

    I've tested it, and it works.
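
    If you want the change to survive across submissions instead of passing the flag every time, the same setting can live in Spark's config files. A minimal sketch, assuming a standalone cluster and reusing the 8g figure from the command above (adjust to your nodes' actual RAM):

    # conf/spark-defaults.conf -- default executor memory for every submitted app
    spark.executor.memory   8g

    # conf/spark-env.sh -- upper bound on the memory a standalone worker
    # may hand out to executors on its node; restart the worker to apply
    export SPARK_WORKER_MEMORY=8g

    Note that --executor-memory (i.e. spark.executor.memory) must fit within the worker's SPARK_WORKER_MEMORY, or the master won't be able to launch executors for your application on that node.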
