Median / quantiles within PySpark groupBy

感情败类 2020-12-04 15:26

I would like to calculate group quantiles on a Spark dataframe (using PySpark). Either an approximate or exact result would be fine. I prefer a solution that I can use within a groupBy / agg, so that I can mix it with other PySpark aggregate functions.

5 Answers
  •  甜味超标
    2020-12-04 15:53

    I guess you don't need it anymore, but I'll leave it here for future generations (i.e. me next week when I forget).

    from pyspark.sql import Window
    import pyspark.sql.functions as F
    
    grp_window = Window.partitionBy('grp')
    magic_percentile = F.expr('percentile_approx(val, 0.5)')
    
    df.withColumn('med_val', magic_percentile.over(grp_window))
    
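    For reference, a minimal self-contained sketch of the window approach on toy data (the SparkSession setup and the sample rows are assumptions for illustration; the column names 'grp' and 'val' match the snippets above):

    from pyspark.sql import SparkSession, Window
    import pyspark.sql.functions as F

    spark = SparkSession.builder.getOrCreate()

    # hypothetical sample data: one group column and one value column
    df = spark.createDataFrame(
        [('a', 1.0), ('a', 2.0), ('a', 3.0), ('b', 10.0), ('b', 20.0)],
        ['grp', 'val'],
    )

    grp_window = Window.partitionBy('grp')
    magic_percentile = F.expr('percentile_approx(val, 0.5)')

    # every row keeps its original columns plus its group's (approximate) median
    df.withColumn('med_val', magic_percentile.over(grp_window)).show()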

    Or to address exactly your question, this also works:

    df.groupBy('grp').agg(magic_percentile.alias('med_val'))
    
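    Because magic_percentile is just a Column expression, it mixes freely with other aggregates in the same agg call, which is what the question was after. A small sketch (the F.avg aggregate and the alias names here are only illustrative):

    df.groupBy('grp').agg(
        magic_percentile.alias('med_val'),  # approximate group median
        F.avg('val').alias('avg_val'),      # illustrative extra aggregate
    )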

    And as a bonus, you can pass an array of percentiles:

    quantiles = F.expr('percentile_approx(val, array(0.25, 0.5, 0.75))')
    

    And you'll get a list in return.
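
    If you would rather have the three quantiles as separate columns instead of one array column, here is a sketch of splitting them out with array indexing (the alias names q25 / median / q75 are my own):

    df.groupBy('grp').agg(quantiles.alias('qs')).select(
        'grp',
        F.col('qs')[0].alias('q25'),     # first quartile
        F.col('qs')[1].alias('median'),
        F.col('qs')[2].alias('q75'),     # third quartile
    )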
