LSH Spark stuck forever at approxSimilarityJoin() function

Submitted by 微笑、不失礼 on 2020-01-03 01:11:11

Question


I am trying to use Spark's MinHash LSH to find the nearest neighbours for each user in a very large dataset of 50,000 rows with ~5,000 features per row. Here is the relevant code.

    import org.apache.spark.ml.feature.MinHashLSH;
    import org.apache.spark.ml.feature.MinHashLSHModel;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    MinHashLSH mh = new MinHashLSH()
            .setNumHashTables(3)
            .setInputCol("features")
            .setOutputCol("hashes");

    MinHashLSHModel model = mh.fit(dataset);

    Dataset<Row> approxSimilarityJoin = model.approxSimilarityJoin(
            dataset, dataset, config.getJaccardLimit(), "JaccardDistance");

    approxSimilarityJoin.show();

The job gets stuck at the approxSimilarityJoin() call and never progresses past it. Please let me know how to solve this.


Answer 1:


It will finish if you leave it long enough; however, there are some things you can do to speed it up. Reviewing the source code, you can see that the algorithm:

  1. hashes the inputs,
  2. joins the two datasets on the hashes,
  3. computes the Jaccard distance using a UDF, and
  4. filters the dataset with your threshold.

https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/feature/LSH.scala
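Step 3 above can be sketched in plain Java. The actual implementation in LSH.scala applies a Spark UDF over the feature vectors; the standalone helper below is only a hypothetical illustration of the distance being computed, using sets of feature indices:

```java
import java.util.HashSet;
import java.util.Set;

public class JaccardSketch {
    // Jaccard distance between two sets: 1 - |A ∩ B| / |A ∪ B|.
    // Spark computes this over the indices of the two sparse feature vectors.
    static double jaccardDistance(Set<Integer> a, Set<Integer> b) {
        Set<Integer> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<Integer> union = new HashSet<>(a);
        union.addAll(b);
        if (union.isEmpty()) {
            return 0.0; // both sets empty: define distance as 0
        }
        return 1.0 - (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        Set<Integer> a = new HashSet<>(Set.of(1, 2, 3));
        Set<Integer> b = new HashSet<>(Set.of(2, 3, 4));
        // intersection {2, 3} has size 2, union {1, 2, 3, 4} has size 4
        System.out.println(jaccardDistance(a, b)); // 0.5
    }
}
```

Pairs whose distance exceeds your threshold (config.getJaccardLimit() in the question) are dropped in step 4.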

The join is probably the slow part here as the data is shuffled. So some things to try:

  1. change your dataframe input partitioning
  2. change spark.sql.shuffle.partitions (the default gives you 200 partitions after a join)
  3. your dataset looks small enough that you could use spark.sql.functions.broadcast(dataset) for a map-side join
  4. check whether your vectors are sparse or dense — the algorithm works better with SparseVectors

Of these four options, 2 and 3 have worked best for me, always while using SparseVectors.
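Options 1 and 2 can be set from the submit command without touching the code. The flag values below are illustrative assumptions to tune for your cluster, and the class and jar names are placeholders, not part of the question:

```shell
# Option 2: change spark.sql.shuffle.partitions (default is 200).
# Option 1 equivalent: spark.default.parallelism influences input partitioning.
# The values 600 / 100 are illustrative only — tune them to your cluster.
spark-submit \
  --conf spark.sql.shuffle.partitions=600 \
  --conf spark.default.parallelism=100 \
  --class com.example.LshJob \
  lsh-job.jar
```

Option 3 is a one-line change in the code itself: wrap the smaller side of the join as model.approxSimilarityJoin(broadcast(dataset), dataset, ...) using the static import of spark.sql.functions.broadcast, which hints Spark to ship that side to every executor and avoid the shuffle.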



Source: https://stackoverflow.com/questions/48927221/lsh-spark-stucks-forever-at-approxsimilarityjoin-function
