Optimize a Spark job that has to calculate each-to-each entry similarity and output the top N similar items for each


It can be done more efficiently, as long as you are fine with approximations and don't require exact results (or an exact number of results).

Similarly to my answer to Efficient string matching in Apache Spark, you can use LSH, with:

  • BucketedRandomProjectionLSH to approximate Euclidean distance.
  • MinHashLSH to approximate Jaccard distance.
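For intuition, a minimal MinHashLSH sketch; the number of hash tables, the 0.6 distance cutoff, and the column names are illustrative, and df is assumed to have (id, features) columns like the example further down:

import org.apache.spark.ml.feature.MinHashLSH

val mh = new MinHashLSH()
  .setNumHashTables(5)        // more tables -> better recall, more work
  .setInputCol("features")
  .setOutputCol("hashes")

val model = mh.fit(df)

// Approximate self-join on Jaccard distance; keep pairs under the cutoff.
model.approxSimilarityJoin(df, df, 0.6, distCol = "jaccardDist")
  .filter($"datasetA.id" < $"datasetB.id")  // drop self-pairs and duplicates
  .select($"datasetA.id", $"datasetB.id", $"jaccardDist")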

If feature space is small (or can be reasonably reduced) and each category is relatively small you can also optimize your code by hand:

  • explode the feature array to generate #features records from a single record.
  • Self-join the result by feature, compute the distance, and filter out candidate pairs that don't meet the threshold (each pair of records is compared if and only if it shares a specific categorical feature).
  • Take the top records using your current code (a window-function sketch follows the example below).

A minimal example (treat it as pseudocode):

import org.apache.spark.ml.linalg._
import org.apache.spark.sql.functions._
import spark.implicits._

// This is oversimplified. In practice don't assume a sparse-only scenario.
val indices = udf((v: SparseVector) => v.indices)

// Sketch of a cosine over the shared indices only; the original question
// defines its own intersectionCosine, which is what's assumed here.
def intersectionCosine(v1: SparseVector, v2: SparseVector): Double = {
  val m1 = v1.indices.zip(v1.values).toMap
  val dot = v2.indices.zip(v2.values).map { case (i, x) => m1.getOrElse(i, 0.0) * x }.sum
  dot / (Vectors.norm(v1, 2) * Vectors.norm(v2, 2))
}

val df = Seq(
  (1L, Vectors.sparse(1024, Array(1, 3, 5), Array(1.0, 1.0, 1.0))),
  (2L, Vectors.sparse(1024, Array(3, 8, 12), Array(1.0, 1.0, 1.0))),
  (3L, Vectors.sparse(1024, Array(3, 5), Array(1.0, 1.0))),
  (4L, Vectors.sparse(1024, Array(11, 21), Array(1.0, 1.0))),
  (5L, Vectors.sparse(1024, Array(21, 32), Array(1.0, 1.0)))
).toDF("id", "features")

// Explode by feature index and self-join: only rows sharing a feature meet.
val possibleMatches = df
  .withColumn("key", explode(indices($"features")))
  .transform(df => df.alias("left").join(df.alias("right"), Seq("key")))

def closeEnough(threshold: Double) = udf(
  (v1: SparseVector, v2: SparseVector) => intersectionCosine(v1, v2) > threshold)

// The 0.5 threshold is illustrative.
possibleMatches
  .filter(closeEnough(0.5)($"left.features", $"right.features"))
  .select($"left.id", $"right.id")
  .distinct
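For the last bullet (taking the top records), a window-function sketch; the pairs DataFrame with (id, candidate, similarity) columns and N = 10 are hypothetical:

import org.apache.spark.sql.expressions.Window

val byScore = Window.partitionBy($"id").orderBy($"similarity".desc)

pairs
  .withColumn("rank", row_number().over(byScore))
  .filter($"rank" <= 10)  // keep the 10 most similar items per id
  .drop("rank")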

Note that both solutions are worth the overhead only if hashing / features are selective enough (and optimally sparse). In the example shown above you'd compare only rows inside sets {1, 2, 3} and {4, 5}, never between the sets.

However, in the worst-case scenario (M records, N features) this can make N·M² comparisons instead of M².

Another thought: given that your matrix is relatively small and sparse, it can fit in memory as a Breeze CSCMatrix[Int].

Then, you can compute co-occurrences using A'B (A.transpose * B), followed by a top-N selection on the log-likelihood ratio (LLR) of each pair. Here, since you keep only the top 10 items per row, the output matrix will be very sparse as well.
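A minimal Breeze sketch of the co-occurrence step; the matrix contents are made up, and the LLR/top-N selection is only indicated:

import breeze.linalg.CSCMatrix

// Hypothetical M x N interaction matrix (e.g. users x items), binary counts.
val builder = new CSCMatrix.Builder[Int](rows = 4, cols = 3)
builder.add(0, 0, 1); builder.add(0, 2, 1)
builder.add(1, 0, 1); builder.add(1, 1, 1)
builder.add(2, 1, 1); builder.add(3, 2, 1)
val a = builder.result()

// Item-item co-occurrence counts A' * A (N x N, stays sparse).
val cooc = a.t * a

// LLR scoring plus per-row top-N selection would follow; see the link below.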

You can lookup the details here:

https://github.com/actionml/universal-recommender

You can borrow from the idea of locality-sensitive hashing. Here is one approach (a GraphX sketch of the set-merge step follows the list):

  • Define a set of hash keys based on your matching requirements. You would use these keys to find potential matches. For example, a possible hash key could be based on the movie actor vector.
  • Perform a reduce for each key. This will give sets of potential matches. For each potential-match set, perform your "exact match". The exact match will produce sets of exact matches.
  • Run a connected-components algorithm to perform the set merge and get the sets of all exact matches.
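A sketch of that last step with GraphX; the match pairs are made up, and the ids stand for record ids coming out of the exact-match step:

import org.apache.spark.graphx.Graph

// Hypothetical exact-match pairs produced by the per-key exact-match step.
val matchPairs = sc.parallelize(Seq((1L, 2L), (2L, 3L), (4L, 5L)))

// Each connected component is one fully merged set of exact matches.
val graph = Graph.fromEdgeTuples(matchPairs, defaultValue = 0)
graph.connectedComponents().vertices.collect.foreach(println)  // (id, componentId)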

I have implemented something similar using the above approach.

Hope this helps.

Another possible solution would be to use the built-in RowMatrix and its brute-force columnSimilarities method, as explained on the Databricks blog:

https://databricks.com/blog/2014/10/20/efficient-similarity-algorithm-now-in-spark-twitter.html

https://datascience.stackexchange.com/questions/14862/spark-item-similarity-recommendation

Notes:

  • Keep in mind that you will always have N² values in the resulting similarity matrix.
  • You will have to concatenate your sparse vectors.
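A minimal columnSimilarities sketch; the data and the DIMSUM threshold are illustrative:

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// Items are columns; each row is one observation.
val rows = sc.parallelize(Seq(
  Vectors.sparse(3, Seq((0, 1.0), (2, 1.0))),
  Vectors.sparse(3, Seq((1, 1.0), (2, 1.0))),
  Vectors.sparse(3, Seq((0, 1.0), (1, 1.0)))
))

val mat = new RowMatrix(rows)

// Exact: mat.columnSimilarities(); approximate (DIMSUM) with a threshold:
val sims = mat.columnSimilarities(0.1)
sims.entries.collect.foreach(println)  // MatrixEntry(i, j, cosineSimilarity)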

One very important suggestion that I have used in similar scenarios is to prune by transitivity. Suppose some movies already have similarity scores:

relation    similarity score
A -> B      8/10
B -> C      7/10
C -> D      9/10

If

E -> A      4/10   // less than some threshold or hyperparameter

then don't calculate the similarity for

E -> B
E -> C
E -> D
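A minimal sketch of that pruning rule; the data, the threshold, and the names are hypothetical:

// Precomputed top-similar lists, e.g. A's neighbours B, C, D.
val topSimilar = Map("A" -> Seq("B", "C", "D"))
val threshold = 0.5

def candidatesFor(anchor: String, simToAnchor: Double): Seq[String] =
  if (simToAnchor < threshold) Seq.empty  // sim(E, A) too low: skip B, C, D too
  else topSimilar.getOrElse(anchor, Seq.empty)

candidatesFor("A", 0.4)  // Seq() -- E is never compared against B, C, or D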