how to specify the partition for mapPartition in spark

Submitted by 为君一笑 on 2019-12-12 02:24:46

Question


What I would like to do is compute each list separately. For example, if I have 5 lists ([1,2,3,4,5,6], [2,3,4,5,6,7], [3,4,5,6,7,8], [4,5,6,7,8,9], [5,6,7,8,9,10]) and I would like to get the 5 lists with the 6 removed, I would do something like:

data = [1,2,3,4,5,6] + [2,3,4,5,6,7] + [3,4,5,6,7,8] + [4,5,6,7,8,9] + [5,6,7,8,9,10]

def function_1(iter_listoflist):
    # after glom(), each partition holds a single element:
    # the list of all values that landed in that partition
    final_iterator = []
    for sublist in iter_listoflist:
        final_iterator.append([x for x in sublist if x != 6])
    return iter(final_iterator)

sc.parallelize(data, 5).glom().mapPartitions(function_1).collect()

then cut the result apart so I get the original lists back. Is there a way to simply keep the computations separate? I don't want the lists to mix, and they might be of different sizes.
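As a side note, a minimal check of how the flattened data lands in partitions (this assumes the 30-element `data` defined above, which `parallelize` happens to split evenly into 5 slices of 6):

sc.parallelize(data, 5).glom().collect()
## [[1, 2, 3, 4, 5, 6], [2, 3, 4, 5, 6, 7], [3, 4, 5, 6, 7, 8],
##  [4, 5, 6, 7, 8, 9], [5, 6, 7, 8, 9, 10]]
## the lists line up with partitions only because the sizes match;
## lists of different sizes would get mixed across partition boundaries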

thank you

Philippe


Answer 1:


As far as I understand your intentions, all you need here is to keep the individual lists separate when you parallelize your data:

data = [[1,2,3,4,5,6], [2,3,4,5,6,7], [3,4,5,6,7,8],
    [4,5,6,7,8,9], [5,6,7,8,9,10]]

rdd = sc.parallelize(data)

rdd.take(1) # A single element of an RDD is a whole list
## [[1, 2, 3, 4, 5, 6]]

Now you can simply map using a function of your choice:

def drop_six(xs):
    return [x for x in xs if x != 6]

rdd.map(drop_six).take(3)
## [[1, 2, 3, 4, 5], [2, 3, 4, 5, 7], [3, 4, 5, 7, 8]]
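If you still want the `mapPartitions` form from the question, an equivalent sketch (the name `drop_six_partition` is just illustrative) processes each list as it streams through, and works regardless of how the lists are spread over partitions:

def drop_six_partition(iterator):
    # each element of the iterator is one complete list
    for xs in iterator:
        yield [x for x in xs if x != 6]

rdd.mapPartitions(drop_six_partition).take(3)
## [[1, 2, 3, 4, 5], [2, 3, 4, 5, 7], [3, 4, 5, 7, 8]]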


Source: https://stackoverflow.com/questions/33562318/how-to-specify-the-partition-for-mappartition-in-spark
