Calculating the averages for each KEY in a Pairwise (K,V) RDD in Spark with Python


I want to share this particular Apache Spark with Python solution because documentation for it is quite poor.

I wanted to calculate the average value of K/V pairs (stored in a Pairwise RDD), by KEY.

4 Answers

    Just adding a note about an intuitive, shorter (but bad) solution to this problem. The book Sam's Teach Yourself Apache Spark in 24 Hours explains this problem well in its last chapter.

    Using groupByKey one can solve the problem easily like this:

    rdd = sc.parallelize([
            (u'2013-10-09', 10),
            (u'2013-10-09', 10),
            (u'2013-10-09', 13),
            (u'2013-10-10', 40),
            (u'2013-10-10', 45),
            (u'2013-10-10', 50)
        ])
    
    rdd \
    .groupByKey() \
    .mapValues(lambda x: sum(x) / len(x)) \
    .collect()
    

    Output:

    [('2013-10-10', 45.0), ('2013-10-09', 11.0)]
    

    This is intuitive and appealing, but don't use it! groupByKey does no map-side combining, so every individual key-value pair is shuffled across the network to the reducers.

    Avoid groupByKey as much as possible. Go with a reduceByKey solution like @pat's; a sketch of that approach follows.
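
    For reference, here is a minimal sketch of the reduceByKey approach on the same rdd (the lambda structure and names here are just one common way to write it, not @pat's exact code): map each value to a (sum, count) pair, let reduceByKey combine those pairs per key (with map-side combining), then divide.

    # Sketch: compute per-key (sum, count), then divide to get the average
    rdd \
    .mapValues(lambda v: (v, 1)) \
    .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])) \
    .mapValues(lambda sum_count: sum_count[0] / sum_count[1]) \
    .collect()


    This produces the same result, [('2013-10-10', 45.0), ('2013-10-09', 11.0)], but each partition is pre-aggregated to a single (sum, count) pair per key before the shuffle.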
