PySpark drop_duplicates(keep=False)


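Both approaches below assume a DataFrame df with columns A, B and C. The question's exact data isn't shown here, so this is a hypothetical frame chosen to reproduce the output that follows:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative data: the (foo, 1) combination appears twice and should be
# dropped entirely; (bar, 5) appears once and should survive.
df = spark.createDataFrame(
    [('foo', 1, 'one'), ('foo', 1, 'two'), ('bar', 5, 'three')],
    ['A', 'B', 'C']
)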
Use a window function to count the number of rows for each A / B combination, then filter the result to keep only the rows whose combination occurs exactly once:

import pyspark.sql.functions as f

df.selectExpr(
  '*', 
  'count(*) over (partition by A, B) as cnt'
).filter(f.col('cnt') == 1).drop('cnt').show()

+---+---+-----+
|  A|  B|    C|
+---+---+-----+
|bar|  5|three|
+---+---+-----+
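The same count can also be expressed with the DataFrame API instead of a SQL string; a minimal sketch using pyspark.sql.Window, equivalent to the selectExpr above:

from pyspark.sql import Window
import pyspark.sql.functions as f

# Count the rows in each A/B partition and keep only singleton groups.
w = Window.partitionBy('A', 'B')
df.withColumn('cnt', f.count('*').over(w)) \
  .filter(f.col('cnt') == 1) \
  .drop('cnt') \
  .show()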

Another option is a grouped-map pandas_udf that drops every group containing more than one row:

from pyspark.sql.functions import pandas_udf, PandasUDFType

# keep_unique returns the group unchanged if it has exactly one row;
# otherwise it returns an empty frame, dropping the whole group
@pandas_udf(df.schema, PandasUDFType.GROUPED_MAP)
def keep_unique(df):
    return df.iloc[:0] if len(df) > 1 else df

df.groupBy('A', 'B').apply(keep_unique).show()

+---+---+-----+
|  A|  B|    C|
+---+---+-----+
|bar|  5|three|
+---+---+-----+
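Note that the GROUPED_MAP pandas_udf style is deprecated as of Spark 3.0; assuming Spark 3.x, the same grouped-map logic is written with applyInPandas, which takes a plain function plus the output schema:

# Spark 3.0+ equivalent: no decorator needed, the schema is passed
# directly to applyInPandas.
def keep_unique(pdf):
    return pdf.iloc[:0] if len(pdf) > 1 else pdf

df.groupBy('A', 'B').applyInPandas(keep_unique, schema=df.schema).show()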