Spark throws a stack overflow error when unioning many RDDs

Submitted by Anonymous (unverified) on 2019-12-03 01:23:02

Question:

When I use "++" to combine a large number of RDDs, I get a stack overflow error.

Spark version: 1.3.1. Environment: yarn-client, with --driver-memory 8G.

The number of RDDs is more than 4000, and each RDD is read from a 1 GB text file.

It is generated along these lines (the snippet was truncated; reconstructed here from the "++" usage described above):

    // each ++ nests another UnionRDD, so the lineage deepens by one level per file
    val collection = (for (path <- files)
      yield sc.textFile(path)).reduce(_ ++ _)

It works fine when the files are small, but then this error appears.

The error repeats itself. I guess it is a recursive function which is called too many times?

    Exception at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
        at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:66)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
        at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:66)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        ...

Answer 1:

Use SparkContext.union(...) instead to union many RDDs at once.

You don't want to do it one at a time like that, since RDD.union() creates a new step in the lineage (an extra set of stack frames on any computation) for each RDD, whereas SparkContext.union() does it all at once. This ensures you don't get a stack overflow error.
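A minimal sketch of the fix (assuming `files` holds the input paths, as in the question):

    // Build every RDD first, then union them in a single step.
    // SparkContext.union creates one UnionRDD over the whole sequence,
    // so the lineage stays flat instead of gaining a level per file.
    val rdds = files.map(path => sc.textFile(path))
    val collection = sc.union(rdds)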



Answer 2:

It seems that unioning RDDs one by one can lead to a very long chain of recursive function calls, in which case we need to increase the JVM thread stack size. In Spark, the option --driver-java-options "-Xss100M" configures the driver JVM stack size to 100 MB.
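For example, when launching with spark-submit (a sketch; the application jar name is a placeholder):

    spark-submit --master yarn-client \
      --driver-memory 8G \
      --driver-java-options "-Xss100M" \
      your-app.jar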

Sean Owen's solution also solves the problem in a more elegant way.


