repartition a dense matrix in pyspark

Submitted by 无人久伴 on 2020-01-15 07:50:27

Question


I have a dense matrix (100 × 100) in PySpark, and I want to repartition it into ten groups, each containing 10 rows.

from pyspark import SparkContext
from pyspark.mllib.linalg import Matrices
from pyspark.mllib.random import RandomRDDs

sc = SparkContext("local", "Simple App")
dm2 = Matrices.dense(100, 100, RandomRDDs.uniformRDD(sc, 10000).collect())
newRdd = sc.parallelize(dm2.toArray())
rerdd = newRdd.repartition(10)

The above code results in rerdd containing 100 elements (one per row), spread arbitrarily across partitions. I want to present the matrix dm2 as row-wise partitioned blocks (e.g., 10 consecutive rows per partition).


Answer 1:


It doesn't make much sense, but you can, for example, do something like this:

import numpy as np
from pyspark.mllib.linalg import Matrices

mat = Matrices.dense(100, 100, np.arange(10000))

n_par = 10
n_row = 100

rdd = (sc
    .parallelize(
        # Add row indices
        enumerate(
            # Extract and reshape values into rows
            mat.values.reshape(n_row, -1)))
    # Partition by row index and sort rows within each partition.
    # Each partition gets n_row // n_par consecutive rows.
    .repartitionAndSortWithinPartitions(n_par, lambda i: i // (n_row // n_par)))

Check number of partitions and rows per partition:

rdd.glom().map(len).collect()
## [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]

Check that the first row contains the desired data:

assert np.all(rdd.first()[1] == np.arange(100))
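To see why the partition function puts ten consecutive rows in each partition, here is a plain-NumPy simulation of the same grouping (no Spark required); the dictionary of stacked blocks is my own illustration, not part of the original answer:

```python
import numpy as np

n_row, n_par = 100, 10
rows_per_part = n_row // n_par  # 10 rows land in each of the 10 partitions

# The partition function: row index i -> partition i // rows_per_part,
# so rows 0-9 go to partition 0, rows 10-19 to partition 1, and so on.
part_of = lambda i: i // rows_per_part

# Simulate the grouping Spark performs on the (index, row) pairs.
parts = {}
for i, row in enumerate(np.arange(10000).reshape(n_row, -1)):
    parts.setdefault(part_of(i), []).append(row)

# Stack each partition's rows back into a 10x100 block.
blocks = {p: np.vstack(rows) for p, rows in parts.items()}
```

Each value in `blocks` is a 10 × 100 sub-matrix of consecutive rows; for example, `blocks[3]` starts with row 30, whose first element is 3000.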


Source: https://stackoverflow.com/questions/36737566/repartition-a-dense-matrix-in-pyspark
