Question
I am using Spark's Python API and I am finding a few matrix operations challenging. My RDD is a one-dimensional list of length n (a row vector). Is it possible to reshape it into a matrix/multidimensional array of size sqrt(n) x sqrt(n)?
For example,
Vec = [1, 2, 3, 4, 5, 6, 7, 8, 9]
and the desired 3 x 3 output is
[[1, 2, 3],
 [4, 5, 6],
 [7, 8, 9]]
Is there an equivalent to NumPy's reshape?
Conditions: n is huge (more than 50 million elements), which rules out using .collect(). Also, can this process be made to run on multiple threads?
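For context, on a single machine this would simply be NumPy's reshape, which is ruled out here because the data does not fit in memory:

import numpy as np

vec = np.arange(1, 10)
mat = vec.reshape(3, 3)
# array([[1, 2, 3],
#        [4, 5, 6],
#        [7, 8, 9]])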
Answer 1:
Something like this should do the trick:
rdd = sc.parallelize(range(1, 10))
nrow = int(rdd.count() ** 0.5)  # Compute the number of rows

rows = (rdd
    .zipWithIndex()                     # Add an index; we assume the data is sorted
    .groupBy(lambda xi: xi[1] // nrow)  # Group by row
    # Within each row, order by column and drop the index
    .mapValues(lambda vals: [x for (x, i) in sorted(vals, key=lambda xi: xi[1])]))
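On the toy example this reproduces the layout from the question; you can verify it by collecting (fine here, but not for the 50-million-element case):

print(rows.sortByKey().values().collect())
# [[1, 2, 3], [4, 5, 6], [7, 8, 9]]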
You can add:
from pyspark.mllib.linalg import DenseVector
rows.mapValues(DenseVector)
if you want proper vectors.
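If you also want a distributed matrix type rather than a plain pair RDD, one option (an extra step beyond the answer above, sketched assuming rows holds (index, vector) pairs as built earlier) is IndexedRowMatrix:

from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix

# Wrap each (row_index, vector) pair in an IndexedRow; the resulting matrix
# stays distributed, so nothing is collected to the driver.
mat = IndexedRowMatrix(rows.map(lambda kv: IndexedRow(kv[0], kv[1])))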
Source: https://stackoverflow.com/questions/31597151/rdd-to-multidimensional-array