How to group by one column in rdd in pyspark?


Question


The RDD in PySpark consists of lists of four elements each:

[id1, 'aaa',12,87]
[id2, 'acx',1,90]
[id3, 'bbb',77,10]
[id2, 'bbb',77,10]
.....

I want to group by the id in the first column and collect the other three columns for each id, for example => [id2, [['acx',1,90], ['bbb',77,10], ...]]. How can I achieve this?


Answer 1:


spark.version
# u'2.2.0'

rdd = sc.parallelize((['id1', 'aaa', 12, 87],
                      ['id2', 'acx', 1, 90],
                      ['id3', 'bbb', 77, 10],
                      ['id2', 'bbb', 77, 10]))

# Key each record by its id, group, then materialize each group's
# values (a ResultIterable) into a plain list:
rdd.map(lambda x: (x[0], x[1:])).groupByKey().mapValues(list).collect()

# result:

[('id2', [['acx', 1, 90], ['bbb', 77, 10]]), 
 ('id3', [['bbb', 77, 10]]), 
 ('id1', [['aaa', 12, 87]])]

Or, if you strictly prefer lists (note that the outer pairs above are tuples), you can add one more map operation after mapValues to convert each pair to a list:

rdd.map(lambda x: (x[0], x[1:])).groupByKey().mapValues(list).map(lambda x: list(x)).collect()

# result:

[['id2', [['acx', 1, 90], ['bbb', 77, 10]]], 
 ['id3', [['bbb', 77, 10]]],
 ['id1', [['aaa', 12, 87]]]]
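If the data can be given a schema, the same grouping can also be expressed with the DataFrame API and collect_list, which may be more convenient downstream. A minimal sketch, assuming hypothetical column names id, name, x, and y (the rows in the question are unnamed):

from pyspark.sql import SparkSession
from pyspark.sql.functions import collect_list, struct

spark = SparkSession.builder.getOrCreate()

# Hypothetical column names; the original rows carry no schema.
df = spark.createDataFrame(
    [('id1', 'aaa', 12, 87),
     ('id2', 'acx', 1, 90),
     ('id3', 'bbb', 77, 10),
     ('id2', 'bbb', 77, 10)],
    ['id', 'name', 'x', 'y'])

# collect_list gathers one struct per row into an array per id:
df.groupBy('id') \
  .agg(collect_list(struct('name', 'x', 'y')).alias('values')) \
  .show(truncate=False)

Each output row then holds an id and an array of (name, x, y) structs, roughly mirroring the grouped RDD result above.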


Source: https://stackoverflow.com/questions/46930791/how-to-group-by-one-column-in-rdd-in-pyspark
