spark: What is the difference between Aggregator and UDAF?

Submitted on 2019-12-06 11:27:34

Question


In Spark's documentation, Aggregator is:

abstract class Aggregator[-IN, BUF, OUT] extends Serializable

A base class for user-defined aggregations, which can be used in Dataset operations to take all of the elements of a group and reduce them to a single value.
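
For concreteness, here is a minimal sketch of an Aggregator that computes an average; the Measurement and AvgBuffer names and the averaging logic are illustrative, not from the question:

import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

case class Measurement(value: Double)
case class AvgBuffer(var sum: Double, var count: Long)

object AverageAgg extends Aggregator[Measurement, AvgBuffer, Double] {
  // Initial buffer for an empty group.
  def zero: AvgBuffer = AvgBuffer(0.0, 0L)
  // Fold one input record into the buffer.
  def reduce(b: AvgBuffer, a: Measurement): AvgBuffer = {
    b.sum += a.value; b.count += 1; b
  }
  // Combine two partial buffers (e.g. from different partitions).
  def merge(b1: AvgBuffer, b2: AvgBuffer): AvgBuffer = {
    b1.sum += b2.sum; b1.count += b2.count; b1
  }
  // Produce the final result from the buffer.
  def finish(b: AvgBuffer): Double = b.sum / b.count
  def bufferEncoder: Encoder[AvgBuffer] = Encoders.product[AvgBuffer]
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}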

UserDefinedAggregateFunction is:

abstract class UserDefinedAggregateFunction extends Serializable

The base class for implementing user-defined aggregate functions (UDAF).
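
And a sketch of the same average written against UserDefinedAggregateFunction, to make the comparison concrete (again, the names are illustrative; note that in Spark 3.x this class was deprecated in favour of wrapping an Aggregator with functions.udaf):

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

object AverageUdaf extends UserDefinedAggregateFunction {
  // Schema of the input Columns the function consumes.
  def inputSchema: StructType = StructType(StructField("value", DoubleType) :: Nil)
  // Schema of the aggregation buffer that Spark maintains externally.
  def bufferSchema: StructType =
    StructType(StructField("sum", DoubleType) :: StructField("count", LongType) :: Nil)
  def dataType: DataType = DoubleType
  def deterministic: Boolean = true
  def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = 0.0
    buffer(1) = 0L
  }
  // Write the updated state back into the Spark-owned buffer.
  def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    if (!input.isNullAt(0)) {
      buffer(0) = buffer.getDouble(0) + input.getDouble(0)
      buffer(1) = buffer.getLong(1) + 1L
    }
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)
    buffer1(1) = buffer1.getLong(1) + buffer2.getLong(1)
  }
  def evaluate(buffer: Row): Double = buffer.getDouble(0) / buffer.getLong(1)
}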

According to Dataset Aggregator - Databricks, “an Aggregator is similar to a UDAF, but the interface is expressed in terms of JVM objects instead of as a Row.”

It seems these two classes are very similar; apart from the types in the interface, what are the other differences?

A similar question is: Performance of UDAF versus Aggregator in Spark


Answer 1:


A fundamental difference, apart from the types, is the external interface:

  • Aggregator takes a complete record (a typed JVM object); it is intended for the "strongly" typed Dataset API.
  • UserDefinedAggregateFunction takes a set of Columns.

This makes Aggregator less flexible, although its overall API is far more user friendly.
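
The interface difference shows up at the call site. A hedged sketch, assuming a SparkSession named spark and the AverageAgg and AverageUdaf objects sketched above:

import spark.implicits._
import org.apache.spark.sql.functions.col

val ds = spark.createDataset(Seq(Measurement(1.0), Measurement(2.0)))

// Aggregator: applied as a TypedColumn over whole records of a typed Dataset.
val typedAvg = ds.select(AverageAgg.toColumn.name("avg"))

// UDAF: applied to specific Columns of an untyped DataFrame.
val untypedAvg = ds.toDF("value").agg(AverageUdaf(col("value")).as("avg"))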

There is also a difference in how state is handled:

  • Aggregator is stateful: it depends on the mutable internal state of its buffer object (the BUF type it declares).
  • UserDefinedAggregateFunction is stateless: the state of the buffer is held externally, in a MutableAggregationBuffer that Spark manages and passes in.
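
The contrast is easiest to see in the two contract methods, taken from the sketches above:

// Aggregator: reduce receives the buffer and returns it, typically after
// mutating the JVM object in place.
def reduce(b: AvgBuffer, a: Measurement): AvgBuffer

// UDAF: update returns Unit and writes into a MutableAggregationBuffer
// (a Row) that Spark owns and passes in from outside.
def update(buffer: MutableAggregationBuffer, input: Row): Unit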


Source: https://stackoverflow.com/questions/48180598/spark-what-is-the-difference-between-aggregator-and-udaf
