Question
I need a PySpark equivalent of Pandas drop_duplicates(keep=False). Unfortunately, the keep=False option is not available in PySpark...
Pandas Example:
import pandas as pd
df_data = {'A': ['foo', 'foo', 'bar'],
           'B': [3, 3, 5],
           'C': ['one', 'two', 'three']}
df = pd.DataFrame(data=df_data)
df = df.drop_duplicates(subset=['A', 'B'], keep=False)
print(df)
Expected output:
     A  B      C
2  bar  5  three
Converting with .toPandas() and back to PySpark is not an option.
Thanks!
Answer 1:
Use a window function to count the number of rows for each A / B combination, then filter the result to keep only the rows whose count is 1:
import pyspark.sql.functions as f

df.selectExpr(
    '*',
    'count(*) over (partition by A, B) as cnt'
).filter(f.col('cnt') == 1).drop('cnt').show()
+---+---+-----+
| A| B| C|
+---+---+-----+
|bar| 5|three|
+---+---+-----+
Or another option using pandas_udf:
from pyspark.sql.functions import pandas_udf, PandasUDFType

# keep_unique returns the group's data frame if it has only one row,
# otherwise it drops the whole group
@pandas_udf(df.schema, PandasUDFType.GROUPED_MAP)
def keep_unique(df):
    return df.iloc[:0] if len(df) > 1 else df

df.groupBy('A', 'B').apply(keep_unique).show()
+---+---+-----+
| A| B| C|
+---+---+-----+
|bar| 5|three|
+---+---+-----+
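Note that PandasUDFType.GROUPED_MAP and groupBy(...).apply(...) are deprecated in Spark 3.x; the same idea is written with groupBy(...).applyInPandas(func, schema). A sketch of the grouped function, assuming a Spark 3 session and the same df as in the question:

```python
import pandas as pd

def keep_unique(pdf: pd.DataFrame) -> pd.DataFrame:
    # Each group arrives as a plain pandas DataFrame; keep it only if
    # it contains exactly one row, otherwise return an empty slice.
    return pdf if len(pdf) == 1 else pdf.iloc[:0]

# With a Spark 3 DataFrame df as above:
# df.groupBy('A', 'B').applyInPandas(keep_unique, schema=df.schema).show()
```

Because the function receives ordinary pandas DataFrames, it can be unit-tested without a Spark session at all.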
Source: https://stackoverflow.com/questions/54116465/pyspark-drop-duplicateskeep-false