Spark: Mapping elements of an RDD using other elements from the same RDD


Question


Suppose I have this RDD:

val r = sc.parallelize(Array(1,4,2,3))

What I want to do is create a mapping, e.g. (in pseudocode):

r.map(x => x + func(all other elements in r))

Is this even possible?


Answer 1:


If you try this directly, you will get an exception like the one below.

rdd = sc.parallelize(range(100))
# This fails: rdd is referenced from inside its own transformation.
rdd = rdd.map(lambda x: x + sum(rdd.collect()))

That is, you are implicitly trying to ship the RDD itself to the executors, which Spark rejects:

Exception: It appears that you are attempting to broadcast an RDD or reference an RDD from an action or transformation. RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(lambda x: rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.

To achieve this, you would have to reduce the RDD on the driver first and broadcast the result:

# Reduce on the driver, then broadcast the read-only total to every executor.
res = sc.broadcast(rdd.reduce(lambda a, b: a + b))
rdd = rdd.map(lambda x: x + res.value)
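
Note that res.value is the sum of all elements, including x itself. If, as the question's pseudocode suggests, func should only see the other elements, a small variant (assuming func is summation) is to subtract x from the broadcast total:

total = sc.broadcast(rdd.reduce(lambda a, b: a + b))
# "all other elements" excludes x itself, so remove it from the total
rdd = rdd.map(lambda x: x + (total.value - x))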



Answer 2:


Spark already supports gradient descent (in MLlib). Maybe you can take a look at how they implemented it.
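
For reference, here is a minimal sketch (not MLlib's actual code; the function names and the squared-error gradient are illustrative assumptions) of the pattern MLlib's GradientDescent follows: broadcast the current weights, then aggregate per-point gradients over the whole RDD with treeAggregate on each iteration.

import numpy as np

def squared_error_gradient(w, point):
    # Illustrative gradient for a (features, label) pair under squared error.
    features, label = point
    return (np.dot(w, features) - label) * features

def gradient_step(sc, data, weights, lr=0.1):
    w = sc.broadcast(weights)  # read-only copy of the current weights on every executor
    grad_sum = data.treeAggregate(
        np.zeros_like(weights),
        lambda acc, p: acc + squared_error_gradient(w.value, p),  # fold within a partition
        lambda a, b: a + b,                                       # merge partition results
    )
    return weights - lr * grad_sum / data.count()

Each iteration touches every element while computing a per-element update, which is exactly the access pattern the question asks about.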




Answer 3:


I don't know whether there is a more efficient alternative, but I would first build a structure like:

rdd = sc.parallelize([(1, [4, 2, 3]), (4, [1, 2, 3]), (2, [1, 4, 3]), (3, [1, 4, 2])])
rdd = rdd.map(lambda pair: pair[0] + func(pair[1]))
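
If you don't want to write those pairs out by hand, one way to derive them from the original RDD is a cartesian product with the self-pairs filtered out. This is a sketch that assumes the elements are distinct and reuses the question's func:

elems = sc.parallelize([1, 4, 2, 3])
pairs = (elems.cartesian(elems)                    # every (x, y) combination
              .filter(lambda xy: xy[0] != xy[1])   # drop the (x, x) pairs
              .groupByKey()                        # x -> iterable of all the others
              .map(lambda kv: (kv[0], list(kv[1]))))
result = pairs.map(lambda kv: kv[0] + func(kv[1]))

Be aware that cartesian shuffles O(n²) pairs, so this is only practical for small RDDs; for anything large, the broadcast approach in Answer 1 is much cheaper.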


Source: https://stackoverflow.com/questions/34337528/spark-mapping-elements-of-an-rdd-using-other-elements-from-the-same-rdd
