Spark: Is there any rule of thumb about the optimal number of partitions of an RDD and its number of elements?


Question


Is there any relationship between the number of elements an RDD contains and its ideal number of partitions?

I have an RDD that has thousands of partitions (because I load it from a source composed of multiple small files; that's a constraint I can't fix, so I have to deal with it). I would like to repartition it (or use the coalesce method), but I don't know in advance the exact number of events the RDD will contain.
So I would like to do it in an automated way, with something that would look like:

val numberOfElements = rdd.count()   // requires a full pass over the data
val magicNumber = 100000             // target number of elements per partition
rdd.coalesce(math.max(1L, numberOfElements / magicNumber).toInt)
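
A minimal sketch of the same idea wrapped in a reusable helper (the function name coalesceByElementCount and the guard against ending up with zero partitions are my additions, not part of the original question):

import org.apache.spark.rdd.RDD

// Sketch only: coalesce an RDD so that each partition holds roughly
// `elementsPerPartition` elements. count() forces a full pass over the data,
// so this is mainly worth doing if the RDD is cached or reused afterwards.
def coalesceByElementCount[T](rdd: RDD[T], elementsPerPartition: Long): RDD[T] = {
  val numberOfElements = rdd.count()
  val targetPartitions = math.max(1L, numberOfElements / elementsPerPartition).toInt
  rdd.coalesce(targetPartitions)
}

val compacted = coalesceByElementCount(rdd, 100000L)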

Is there any rule of thumb about the optimal number of partition of a RDD and its number of elements ?

Thanks.


Answer 1:


There isn't, because it is highly dependent on the application, resources, and data. There are some hard limitations (like various 2 GB limits), but the rest you have to tune on a task-by-task basis. Some factors to consider:

  • size of a single row / element (see the sketch after this list)
  • cost of a typical operation: if you have small partitions and operations are cheap, the scheduling cost can be much higher than the cost of data processing
  • cost of processing a whole partition when performing partition-wise operations (a sort, for example)
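
To make the first factor concrete, here is a hedged sketch (the helper name estimatePartitionCount, the 128 MB target, and the driver-side sampling are my assumptions, not something the answer specifies) that estimates the in-memory size of a single element with Spark's SizeEstimator and derives a partition count from it:

import org.apache.spark.rdd.RDD
import org.apache.spark.util.SizeEstimator

// Sketch only: sample a few elements on the driver, estimate their JVM object size
// (usually larger than the serialized / on-disk size), and pick a partition count
// so that each partition stays around `targetPartitionBytes`.
def estimatePartitionCount[T <: AnyRef](
    rdd: RDD[T],
    targetPartitionBytes: Long = 128L * 1024 * 1024,
    sampleSize: Int = 100): Int = {
  val sample = rdd.take(sampleSize)
  if (sample.isEmpty) 1
  else {
    val avgElementBytes = sample.map(SizeEstimator.estimate(_)).sum / sample.length
    val totalBytes = avgElementBytes * rdd.count()
    math.max(1L, totalBytes / targetPartitionBytes).toInt
  }
}

Treat the result as a starting point only; actual memory use also depends on serialization and on the operations you apply.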

If the core problem here is the number of initial files, then using some variant of CombineFileInputFormat could be a better idea than repartitioning / coalescing. For example:

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.lib.CombineTextInputFormat

// Combines many small files into larger input splits at read time
sc.hadoopFile(
  path,
  classOf[CombineTextInputFormat],
  classOf[LongWritable], classOf[Text]
).map(_._2.toString)

See also How to calculate the best numberOfPartitions for coalesce?




Answer 2:


While I completely agree with zero323, you can still implement some kind of heuristic. Internally, we took the size of the data (stored as compressed Avro key-value files) and computed the number of partitions such that no partition would be larger than 64 MB (totalVolume / 64 MB ≈ number of partitions). Once in a while we run an automatic job to recompute the "optimal" number of partitions for each type of input, and so on. In our case it's easy to do since the inputs are on HDFS (S3 would probably work too).
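
A hedged sketch of that size-based heuristic (the helper name partitionsForInput and the specific FileSystem calls are my choice; the answer only describes the idea): sum the on-disk size of the compressed input and divide by the per-partition target.

import org.apache.hadoop.fs.Path
import org.apache.spark.SparkContext

// Sketch only: derive a partition count from the total on-disk (compressed) size
// of the input, targeting roughly 64 MB per partition.
def partitionsForInput(sc: SparkContext,
                       inputPath: String,
                       targetPartitionBytes: Long = 64L * 1024 * 1024): Int = {
  val path = new Path(inputPath)
  val fs = path.getFileSystem(sc.hadoopConfiguration)
  val totalBytes = fs.getContentSummary(path).getLength
  math.max(1L, totalBytes / targetPartitionBytes).toInt
}

val target = partitionsForInput(sc, "hdfs:///data/input")  // hypothetical path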

Once again it depends on your computation and your data, so your number might be completely different.



Source: https://stackoverflow.com/questions/36009392/spark-is-there-any-rule-of-thumb-about-the-optimal-number-of-partition-of-a-rdd
