PySpark RDD reduceByKey with multiple functions

爱一瞬间的悲伤 2020-12-10 21:18

I have a PySpark DataFrame named DF with (K, V) pairs. I would like to apply multiple functions with reduceByKey. For example, I have the following three simple functions:

    minFunc = lambda x, y: min(x, y)
    maxFunc = lambda x, y: max(x, y)
    sumFunc = lambda x, y: x + y
1 Answer
  • 2020-12-10 21:49

    If the input is a DataFrame, just use agg:

    import pyspark.sql.functions as sqlf
    
    df = sc.parallelize([
       ("foo", 1.0), ("foo", 2.5), ("bar", -1.0), ("bar", 99.0)
    ]).toDF(["k", "v"])
    
    df.groupBy("k").agg(sqlf.min("v"), sqlf.max("v"), sqlf.sum("v")).show()
    
    ## +---+------+------+------+
    ## |  k|min(v)|max(v)|sum(v)|
    ## +---+------+------+------+
    ## |bar|  -1.0|  99.0|  98.0|
    ## |foo|   1.0|   2.5|   3.5|
    ## +---+------+------+------+
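    
    If you want friendlier column names, the same aggregation can take aliases (a minimal sketch using the df defined above; the min_v/max_v/sum_v names are just illustrative):
    
    df.groupBy("k").agg(
        sqlf.min("v").alias("min_v"),
        sqlf.max("v").alias("max_v"),
        sqlf.sum("v").alias("sum_v")
    ).show()  # same values as above, with columns named min_v, max_v, sum_v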
    

    With RDDs you can use StatCounter:

    from pyspark.statcounter import StatCounter
    
    rdd = df.rdd
    # aggregateByKey(zeroValue, seqOp, combOp): fold each value into a
    # per-key StatCounter, then merge the per-partition StatCounters
    stats = rdd.aggregateByKey(
        StatCounter(), StatCounter.merge, StatCounter.mergeStats
    ).mapValues(lambda s: (s.min(), s.max(), s.sum()))
    
    stats.collect()
    ## [('bar', (-1.0, 99.0, 98.0)), ('foo', (1.0, 2.5, 3.5))]
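    
    Since a StatCounter tracks a full set of summary statistics in the same pass, other aggregates come essentially for free (a sketch along the same lines; stats_full is just an illustrative name):
    
    stats_full = rdd.aggregateByKey(
        StatCounter(), StatCounter.merge, StatCounter.mergeStats
    ).mapValues(lambda s: (s.count(), s.mean(), s.sampleStdev()))
    
    stats_full.collect()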
    

    Using your functions you could do something like this. Note that combineByKey takes separate functions for merging a single value into an accumulator and for merging two accumulators, so each step needs its own implementation:

    def mergeValue(acc, v, funs=[minFunc, maxFunc, sumFunc]):
        # fold one value into the (min, max, sum) accumulator
        return tuple(f(a, v) for f, a in zip(funs, acc))
    
    def mergeCombiners(acc1, acc2, funs=[minFunc, maxFunc, sumFunc]):
        return tuple(f(a, b) for f, a, b in zip(funs, acc1, acc2))
    
    rdd.combineByKey(lambda v: (v, v, v), mergeValue, mergeCombiners).collect()
    ## [('bar', (-1.0, 99.0, 98.0)), ('foo', (1.0, 2.5, 3.5))]
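    
    If you specifically want a single reduceByKey, as in the title, you can first widen each value into a (v, v, v) triple and then reduce element-wise (a minimal sketch assuming the minFunc, maxFunc and sumFunc from the question; tripled is just an illustrative name):
    
    tripled = rdd.mapValues(lambda v: (v, v, v))
    tripled.reduceByKey(
        lambda a, b: (minFunc(a[0], b[0]), maxFunc(a[1], b[1]), sumFunc(a[2], b[2]))
    ).collect()
    ## [('bar', (-1.0, 99.0, 98.0)), ('foo', (1.0, 2.5, 3.5))]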
    