org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.lookupTimeout

渐次进展 2020-12-31 21:59

Getting the below error with respect to the container while submitting a Spark application to YARN. The Hadoop (2.7.3) / Spark (2.1) environment is running in pseudo-distributed mode.

2 Answers
  •  陌清茗
     2020-12-31 22:16

    You can keep increasing spark.network.timeout until you stop seeing the problem, as himanshuIIITian mentioned in a comment. A sketch of how to pass it at submit time is shown below.
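    A minimal sketch of raising the timeout via spark-submit (the 600s value, application JAR, and class name here are placeholder assumptions, not from the original post; spark.rpc.lookupTimeout falls back to spark.network.timeout when not set explicitly):

        # Placeholder app JAR and class; substitute your own.
        spark-submit \
          --master yarn \
          --conf spark.network.timeout=600s \
          --class com.example.MyApp \
          my-app.jar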
    When Spark is under a heavy workload, timeout exceptions can occur. If executor memory is low, GC may keep the system very busy, which increases the workload further. Check the logs for an OutOfMemoryError. Enable -XX:+PrintGCDetails -XX:+PrintGCTimeStamps in spark.executor.extraJavaOptions and check the logs to see whether full GC runs many times before a task completes. If so, increase your executor memory; that should hopefully solve the problem. A sketch of such a submit command follows.
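    A minimal sketch combining the GC-logging flags above with a larger executor memory (the 4g value, JAR, and class name are illustrative assumptions; tune them to your cluster):

        # Placeholder app JAR, class, and memory size; adjust for your cluster.
        spark-submit \
          --master yarn \
          --executor-memory 4g \
          --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
          --class com.example.MyApp \
          my-app.jar

        # Repeated "Full GC" lines in the executor stdout logs before a task
        # completes indicate memory pressure; raise --executor-memory further.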
