Which function in Spark is used to combine two RDDs by keys?

Submitted by 大兔子大兔子 on 2019-11-30 21:52:09

Question


Let us say I have the following two RDDs, with the following key-value pairs.

rdd1 = [ (key1, [value1, value2]), (key2, [value3, value4]) ]

and

rdd2 = [ (key1, [value5, value6]), (key2, [value7]) ]

Now, I want to combine them by key, so that, for example, the result is the following:

ret = [ (key1, [value1, value2, value5, value6]), (key2, [value3, value4, value7]) ] 

How can I do this in Spark, using Python or Scala? One way is to use join, but join would nest the two value lists in a tuple inside each pair. I want only a single flat list of values per key.


Answer 1:


I would union the two RDDs and do a reduceByKey to merge the values.

(rdd1 union rdd2).reduceByKey(_ ++ _)
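
A minimal sketch of this answer, assuming an existing SparkContext named sc (spark-shell provides one) and the example values from the question:

val rdd1 = sc.parallelize(Seq(("key1", List("value1", "value2")), ("key2", List("value3", "value4"))))
val rdd2 = sc.parallelize(Seq(("key1", List("value5", "value6")), ("key2", List("value7"))))

// union keeps every pair from both RDDs; reduceByKey then concatenates
// the value lists that share a key.
val ret = (rdd1 union rdd2).reduceByKey(_ ++ _)

ret.collect().foreach(println)
// (key1,List(value1, value2, value5, value6))
// (key2,List(value3, value4, value7))  -- key order in the output may vary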



Answer 2:


Just use join and then map over the resulting RDD.

rdd1.join(rdd2).map { case (k, (ls, rs)) => (k, ls ++ rs) }
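
Note that join only keeps keys present in both RDDs, so this matches the union approach here only because rdd1 and rdd2 contain exactly the same keys. A short sketch, reusing the rdd1 and rdd2 defined in the snippet above:

// join yields (k, (leftList, rightList)); the map flattens the two lists.
val ret = rdd1.join(rdd2).map { case (k, (ls, rs)) => (k, ls ++ rs) }

ret.collect().foreach(println)
// (key1,List(value1, value2, value5, value6))
// (key2,List(value3, value4, value7))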


Source: https://stackoverflow.com/questions/26908031/which-function-in-spark-is-used-to-combine-two-rdds-by-keys
