Pyspark drop_duplicates(keep=False)

给你一囗甜甜゛ submitted on 2019-12-20 04:25:17

Question


I need a PySpark equivalent of Pandas `drop_duplicates(keep=False)`. Unfortunately, the `keep=False` option is not available in PySpark...

Pandas Example:

import pandas as pd

df_data = {'A': ['foo', 'foo', 'bar'],
           'B': [3, 3, 5],
           'C': ['one', 'two', 'three']}
df = pd.DataFrame(data=df_data)
df = df.drop_duplicates(subset=['A', 'B'], keep=False)
print(df)

Expected output:

     A  B       C
2  bar  5  three

Converting with `.toPandas()` and back to PySpark is not an option.

Thanks!


Answer 1:


Use a window function to count the number of rows for each A / B combination, then filter the result to keep only the rows whose count is 1:

import pyspark.sql.functions as f

df.selectExpr(
  '*', 
  'count(*) over (partition by A, B) as cnt'
).filter(f.col('cnt') == 1).drop('cnt').show()

+---+---+-----+
|  A|  B|    C|
+---+---+-----+
|bar|  5|three|
+---+---+-----+

Or another option using a grouped-map `pandas_udf` (note that in Spark 3+, `groupBy(...).applyInPandas` is the preferred equivalent):

from pyspark.sql.functions import pandas_udf, PandasUDFType

# keep_unique returns the group unchanged if it has exactly one row;
# otherwise it drops the group by returning an empty frame
@pandas_udf(df.schema, PandasUDFType.GROUPED_MAP)
def keep_unique(df):
    return df.iloc[:0] if len(df) > 1 else df

df.groupBy('A', 'B').apply(keep_unique).show()
+---+---+-----+
|  A|  B|    C|
+---+---+-----+
|bar|  5|three|
+---+---+-----+


Source: https://stackoverflow.com/questions/54116465/pyspark-drop-duplicateskeep-false
