Spark Dataset aggregation similar to RDD aggregate(zero)(accum, combiner)


Question


RDD has a very useful method, aggregate, that allows you to accumulate values starting from some zero value and then combine the results across partitions. Is there any way to do that with Dataset[T]? As far as I can see from the Scaladoc, there is nothing capable of doing that. Even the reduce method only allows binary operations with T as both arguments. Any reason why? And is there anything capable of doing the same?
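For reference, here is roughly what I mean on the RDD side (a minimal sketch, assuming a SparkContext sc is in scope):

    // Compute (sum, count) in a single pass: a zero value, a per-partition
    // seqOp that folds each element in, and a combOp that merges partition results.
    val rdd = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0))

    val (sum, count) = rdd.aggregate((0.0, 0L))(
      (acc, x) => (acc._1 + x, acc._2 + 1L),   // seqOp: fold one value into the accumulator
      (a, b)   => (a._1 + b._1, a._2 + b._2)   // combOp: merge accumulators across partitions
    )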

Thanks a lot!

VK


Answer 1:


There are two different classes which can be used to achieve aggregate-like behavior in the Dataset API:

  • UserDefinedAggregateFunction, which uses SQL types and takes Columns as input.

    The initial value is defined with the initialize method, seqOp corresponds to the update method, and combOp to the merge method (a minimal sketch follows this list).

    Example implementation: How to define a custom aggregation function to sum a column of Vectors?

  • Aggregator, which uses standard Scala types with Encoders and takes records as input.

    The initial value is defined with the zero method, seqOp corresponds to the reduce method, and combOp to the merge method (a sketch follows the note below).

    Example implementation: How to find mean of grouped Vector columns in Spark SQL?
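Here is a minimal sketch of the UserDefinedAggregateFunction route, summing a single Double column. The class name DoubleSum and the column name "x" are hypothetical, and this is not taken from the linked answer:

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
    import org.apache.spark.sql.types._

    class DoubleSum extends UserDefinedAggregateFunction {
      def inputSchema: StructType  = StructType(StructField("value", DoubleType) :: Nil)
      def bufferSchema: StructType = StructType(StructField("sum", DoubleType) :: Nil)
      def dataType: DataType = DoubleType
      def deterministic: Boolean = true

      // initialize ~ the zero value
      def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = 0.0

      // update ~ seqOp: fold one input row into the buffer
      def update(buffer: MutableAggregationBuffer, input: Row): Unit =
        buffer(0) = buffer.getDouble(0) + input.getDouble(0)

      // merge ~ combOp: combine partial buffers from different partitions
      def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
        buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)

      // evaluate produces the final result
      def evaluate(buffer: Row): Double = buffer.getDouble(0)
    }

    // Usage, assuming a DataFrame df with a Double column "x":
    // val sumX = new DoubleSum
    // df.agg(sumX(df("x"))).show()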

Both provide an additional finalization method (evaluate and finish respectively), which is used to generate the final result and can be used for both global and by-key aggregations.
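And a minimal sketch of the Aggregator route, computing the average of a Dataset[Double]. The object name DoubleAverage is hypothetical, and this is not taken from the linked answer:

    import org.apache.spark.sql.{Encoder, Encoders}
    import org.apache.spark.sql.expressions.Aggregator

    object DoubleAverage extends Aggregator[Double, (Double, Long), Double] {
      // zero ~ the zero value of the (sum, count) aggregation buffer
      def zero: (Double, Long) = (0.0, 0L)

      // reduce ~ seqOp: fold one record into the buffer
      def reduce(acc: (Double, Long), x: Double): (Double, Long) =
        (acc._1 + x, acc._2 + 1L)

      // merge ~ combOp: combine buffers from different partitions
      def merge(a: (Double, Long), b: (Double, Long)): (Double, Long) =
        (a._1 + b._1, a._2 + b._2)

      // finish produces the final result
      def finish(acc: (Double, Long)): Double =
        if (acc._2 == 0L) Double.NaN else acc._1 / acc._2

      def bufferEncoder: Encoder[(Double, Long)] =
        Encoders.tuple(Encoders.scalaDouble, Encoders.scalaLong)
      def outputEncoder: Encoder[Double] = Encoders.scalaDouble
    }

    // Usage, assuming ds: Dataset[Double]:
    // val avg = ds.select(DoubleAverage.toColumn).first()
    // By-key variant (needs import spark.implicits._ for the key encoder):
    // ds.groupByKey(x => x).agg(DoubleAverage.toColumn)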



Source: https://stackoverflow.com/questions/42378806/spark-dataset-aggregation-similar-to-rdd-aggregatezeroaccum-combiner
