PySpark RDD reduceByKey with multiple functions

Posted by 无人久伴 on 2019-11-28 11:47:44

If the input is a DataFrame, just use agg:

import pyspark.sql.functions as sqlf

df = sc.parallelize([
   ("foo", 1.0), ("foo", 2.5), ("bar", -1.0), ("bar", 99.0)
]).toDF(["k", "v"])

df.groupBy("k").agg(sqlf.min("v"), sqlf.max("v"), sqlf.sum("v")).show()

## +---+------+------+------+
## |  k|min(v)|max(v)|sum(v)|
## +---+------+------+------+
## |bar|  -1.0|  99.0|  98.0|
## |foo|   1.0|   2.5|   3.5|
## +---+------+------+------+
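
If you want friendlier column names than min(v), max(v) and sum(v), the same aggregation can alias each column; a minimal variation of the call above:

df.groupBy("k").agg(
    sqlf.min("v").alias("min_v"),
    sqlf.max("v").alias("max_v"),
    sqlf.sum("v").alias("sum_v")
).show()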

With RDDs you can use StatCounter:

from pyspark.statcounter import StatCounter

rdd = df.rdd  # Rows behave like (k, v) pairs for the *ByKey operations
# One pass per key: StatCounter.merge folds in a single value,
# StatCounter.mergeStats combines two partial StatCounters
stats = rdd.aggregateByKey(
    StatCounter(), StatCounter.merge, StatCounter.mergeStats
).mapValues(lambda s: (s.min(), s.max(), s.sum()))

stats.collect()
## [('bar', (-1.0, 99.0, 98.0)), ('foo', (1.0, 2.5, 3.5))]
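
StatCounter also tracks count, mean and variance, so richer per-key summaries come out of the same single-pass aggregation; a minimal sketch along the same lines:

rdd.aggregateByKey(
    StatCounter(), StatCounter.merge, StatCounter.mergeStats
).mapValues(lambda s: (s.count(), s.mean(), s.stdev())).collect()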

Using your own functions (minFunc, maxFunc and sumFunc from the question) you could do something like this; example stand-in definitions follow the snippet:

def apply(x, y, funs=[minFunc, maxFunc, sumFunc]):
    # y is a single new value (mergeValue) or another accumulator (mergeCombiners)
    ys = y if isinstance(y, (tuple, list)) else [y] * len(funs)
    return [f(x_, y_) for f, x_, y_ in zip(funs, x, ys)]

# The lambda seeds all three accumulators; apply serves as both mergeValue and mergeCombiners
rdd.combineByKey(lambda x: (x, x, x), apply, apply).collect()
## [('bar', [-1.0, 99.0, 98.0]), ('foo', [1.0, 2.5, 3.5])]
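
minFunc, maxFunc and sumFunc here are the two-argument functions from the question and are not defined in this answer; to run the snippet end to end, define hypothetical stand-ins like these before apply:

from operator import add

# Hypothetical stand-ins for the asker's functions; any commutative,
# associative two-argument functions fit the same pattern
minFunc, maxFunc, sumFunc = min, max, add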