What are the likely causes of org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle?

独厮守ぢ 2021-02-04 00:08

I'm deploying a Spark data processing job on an EC2 cluster. The job is small for the cluster (16 cores with 120 GB RAM in total), and the largest RDD has only 76k+ rows. But heavily…

2 Answers
  •  半阙折子戏 2021-02-04 00:50

    Check your log for an error similar to this:

    ERROR 2015-05-12 17:29:16,984 Logging.scala:75 - Lost executor 13 on node-xzy: remote Akka client disassociated
    

    Every time you get this error, it is because you lost an executor. Why you lost the executor is another story; again, check your log for clues.

    One thing to note: YARN can kill your job if it thinks you are using "too much memory".

    Check for something like this:

    org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl  - Container [] is running beyond physical memory limits. Current usage: 18.0 GB of 18 GB physical memory used; 19.4 GB of 37.8 GB virtual memory used. Killing container.
    
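    To see why that limit is hit, note that the size YARN enforces is the executor heap plus the memory overhead. Below is a rough, illustrative calculation; the figures are hypothetical, not taken from the question, and the max(384 MB, 10% of executor memory) default matches recent Spark releases (older ones used a smaller factor):

        object ContainerSizing extends App {
          // Hypothetical executor sizing, for illustration only.
          val executorMemoryMb = 16 * 1024                                      // spark.executor.memory = 16g
          val overheadMb       = math.max(384, (executorMemoryMb * 0.10).toInt) // default memoryOverhead
          val containerMb      = executorMemoryMb + overheadMb                  // the limit YARN enforces

          // If the executor's real footprint (JVM heap + off-heap buffers used
          // during shuffles) exceeds containerMb, the NodeManager kills the
          // container and the driver reports the executor as lost.
          println(s"YARN container limit ~= $containerMb MB ($executorMemoryMb MB heap + $overheadMb MB overhead)")
        }
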

    Also see: http://apache-spark-developers-list.1001551.n3.nabble.com/Lost-executor-on-YARN-ALS-iterations-td7916.html

    From that thread: "The current state of the art is to increase spark.yarn.executor.memoryOverhead until the job stops failing. We do have plans to try to automatically scale this based on the amount of memory requested, but it will still just be a heuristic."
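
    A minimal sketch of applying that advice, assuming Spark on YARN and a plain SparkContext; the app name and memory figures are placeholders to iterate on, not recommended values (on Spark 2.3+ the setting is named spark.executor.memoryOverhead):

        import org.apache.spark.{SparkConf, SparkContext}

        object SubmitWithMoreOverhead {
          def main(args: Array[String]): Unit = {
            val conf = new SparkConf()
              .setAppName("shuffle-heavy-job")                    // placeholder app name
              .set("spark.executor.memory", "12g")                // JVM heap per executor
              .set("spark.yarn.executor.memoryOverhead", "3072")  // extra off-heap headroom, in MB
            val sc = new SparkContext(conf)
            // ... shuffle-heavy job ...
            sc.stop()
          }
        }

    The same setting can also be passed at launch time with --conf spark.yarn.executor.memoryOverhead=3072 on the spark-submit command line, raising it step by step until the container kills stop.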
