pyspark: isin vs join


Question


What are the general best practices for filtering a dataframe in pyspark by a given list of values? Specifically:

Depending on the size of the given list of values, when is it better, with respect to runtime, to use isin, an inner join, or a broadcast?
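
For concreteness, a minimal setup (the dataframe, column and value names here are made up for illustration; they are not part of the original question):

from pyspark.sql import SparkSession
import pyspark.sql.functions as psf

spark = SparkSession.builder.getOrCreate()

# A (large) dataframe to filter, and the list of values to keep.
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "key"])
allowed = ["a", "c"]
allowed_df = spark.createDataFrame([(v,) for v in allowed], ["key"])

# Option 1: isin with a plain Python list.
filtered_isin = df.filter(psf.col("key").isin(allowed))

# Option 2: inner join against a dataframe built from the list.
filtered_join = df.join(allowed_df, on="key", how="inner")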

This question is the spark analogue of the following question in Pig:

Pig: efficient filtering by loaded list

Additional context:

Pyspark isin function


Answer 1:


Considering

import pyspark.sql.functions as psf

There are two types of broadcasting (both are sketched after this list):

  • sc.broadcast() copies a Python object to every node, for a more efficient use of isin
  • psf.broadcast, inside a join, copies your pyspark dataframe to every node when that dataframe is small: df1.join(psf.broadcast(df2)). It is usually used for cartesian products (CROSS JOIN in Pig).
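
A sketch of both, reusing the hypothetical df, allowed and allowed_df defined with the question above:

# 1) sc.broadcast copies a plain Python object to every executor once;
#    here the broadcast list backs the isin filter, as in the first bullet.
allowed_bc = spark.sparkContext.broadcast(allowed)
filtered_isin_bc = df.filter(psf.col("key").isin(allowed_bc.value))

# 2) psf.broadcast replicates the small dataframe to every node inside a
#    join, so the large dataframe does not need to be shuffled.
filtered_join_bc = df.join(psf.broadcast(allowed_df), on="key", how="inner")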

In the linked question, the filtering was done using a column of another dataframe, hence the possible solution with a join (sketched below).
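
When the filter values already live in another dataframe, a left semi join keeps only the matching rows of the large frame without pulling in the other frame's columns; a sketch, reusing the hypothetical allowed_df from above:

# Keep rows of df whose key appears in allowed_df. Unlike a plain inner
# join, no columns from allowed_df are added and duplicate keys in
# allowed_df do not multiply rows.
filtered_semi = df.join(allowed_df, on="key", how="left_semi")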

Keep in mind that if your filtering list is relatively big, the operation of searching through it will take a while, and since it has to be done for each row it can quickly get costly.

Joins, on the other hand, involve two dataframes that will be sorted before matching, so if your list is small enough you might not want to sort a huge dataframe just for a filter.
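
One way to check which strategy Spark actually chose is to inspect the physical plan; a sketch, assuming the hypothetical dataframes from above:

# A broadcast join appears as BroadcastHashJoin in the plan, a shuffle-based
# join as SortMergeJoin, and an isin filter as a plain Filter with the
# literal values inlined.
filtered_join.explain()
df.join(psf.broadcast(allowed_df), on="key", how="inner").explain()
filtered_isin.explain()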



Source: https://stackoverflow.com/questions/45803888/pyspark-isin-vs-join
