Question
I need to compute pairwise symmetric scores for the items of a list in Spark, i.e. score(x[i], x[j]) = score(x[j], x[i]). One solution is to use x.cartesian(x). However, for n = len(x) this performs n**2 operations instead of the minimal necessary n*(n+1)//2.
What is the most efficient remedy for this issue in Spark?
PS. In pure Python I would use an iterator like this:
class uptrsq_range(object):
    def __init__(self, n):
        self._n_ = n
        self._length = n * (n + 1) // 2

    def __iter__(self):
        # enumerate the upper-triangular index pairs, diagonal included
        for ii in range(self._n_):
            for jj in range(ii + 1):
                yield (ii, jj)

    def __len__(self):
        """
        This method returns the total number of elements.
        Recipe by sleblanc @ stackoverflow.
        """
        if self._length:
            return self._length
        else:
            # or simply return None / 0, depending on implementation
            raise NotImplementedError("Infinite sequence has no length")

for i, j in uptrsq_range(len(x)):
    score(x[i], x[j])
Answer 1:
The most universal approach is to follow cartesian with filter. For example:
rdd = sc.parallelize(range(10))
pairs = rdd.cartesian(rdd).filter(lambda x: x[0] < x[1])
pairs.count()
## 45
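Note that filtering on the values drops the diagonal and merges duplicate values. If duplicates are possible, or the diagonal (i == j) pairs are needed for the full n*(n+1)//2 count, a minimal sketch that compares positions obtained from zipWithIndex instead of the values themselves:
indexed = rdd.zipWithIndex()  # (value, index) pairs
pairs_by_index = (indexed.cartesian(indexed)
                  .filter(lambda p: p[0][1] <= p[1][1])   # keep i <= j
                  .map(lambda p: (p[0][0], p[1][0])))
pairs_by_index.count()
## 55 (10 * 11 // 2, diagonal included)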
If the RDD is relatively small you can collect, broadcast and flatMap:
xs = sc.broadcast(rdd.collect())
pairs = rdd.flatMap(lambda y: [(x, y) for x in xs.value if x < y])
pairs.count()
## 45
This is particularly useful if the data can be further filtered inside flatMap to reduce the number of yielded values.
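For instance, a minimal sketch with a hypothetical "closeness" threshold, so far-apart pairs are never yielded at all:
threshold = 3  # hypothetical cut-off, not part of the original problem
pairs = rdd.flatMap(
    lambda y: [(x, y) for x in xs.value if x < y and y - x <= threshold])
pairs.count()
## 24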
If the data is too large to be collected / stored in memory but can be easily computed (like a range of numbers) or can be efficiently accessed from the workers (e.g. a locally accessible database), you can flatMap as above or use mapPartitions, for example like this:
def some_function(iter):
    # one connection per partition, shared by all its elements
    import sqlite3
    conn = sqlite3.connect('example.db')
    c = conn.cursor()
    query = ...

    for x in iter:
        # fetch some data from a database
        c.execute(query, (x, ))
        for y in c.fetchall():
            yield (x, y)

rdd.mapPartitions(some_function)
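For the easily computed case (like a range of numbers), a minimal sketch where each element generates its partners directly inside flatMap, so nothing needs to be collected or broadcast:
n = 10
nums = sc.parallelize(range(n))
pairs = nums.flatMap(lambda y: [(x, y) for x in range(y + 1)])
pairs.count()
## 55 (n * (n + 1) // 2, diagonal included)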
Source: https://stackoverflow.com/questions/34111965/upper-triangle-of-cartesian-in-spark-for-symmetric-operations-xx1-2-inst