Remove duplicates from PySpark array column

Submitted by こ雲淡風輕ζ on 2021-02-08 06:48:50

Question


I have a PySpark DataFrame that contains an ArrayType(StringType()) column. This column contains duplicate strings inside the array which I need to remove. For example, one row entry could look like [milk, bread, milk, toast]. Let's say my DataFrame is named df and my column is named arraycol. I need something like:

df = df.withColumn("arraycol_without_dupes", F.remove_dupes_from_array("arraycol"))

My intuition was that there exists a simple solution to this, but after browsing Stack Overflow for 15 minutes I didn't find anything better than exploding the column, removing duplicates on the complete DataFrame, then grouping again. There has got to be a simpler way that I just didn't think of, right?

I am using Spark version 2.4.0.


Answer 1:


For pyspark version 2.4+, you can use pyspark.sql.functions.array_distinct:

from pyspark.sql.functions import array_distinct
df = df.withColumn("arraycol_without_dupes", array_distinct("arraycol"))

For older versions, you can do this with the API functions using explode + groupBy and collect_set, but a UDF is probably more efficient here:

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

# Note: set() does not preserve element order
remove_dupes_from_array = udf(lambda row: list(set(row)), ArrayType(StringType()))
df = df.withColumn("arraycol_without_dupes", remove_dupes_from_array("arraycol"))


Source: https://stackoverflow.com/questions/54185710/remove-duplicates-from-pyspark-array-column
