Count the number of duplicate rows in Spark SQL

房东的猫 submitted on 2019-11-29 07:46:35

You essentially want to groupBy() all of the columns, count() the rows in each group, and then sum the counts for the groups where the count is greater than 1.

import pyspark.sql.functions as f

(df.groupBy(df.columns)           # group by every column, so identical rows land in one group
    .count()                      # adds a 'count' column holding each group's size
    .where(f.col('count') > 1)    # keep only the groups that occur more than once
    .select(f.sum('count'))       # total number of rows involved in duplication
    .show())
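
If you want something self-contained to experiment with, here is a minimal sketch; the toy DataFrame (the column names col1/col2/col3 and the values) is an assumption chosen to match the example table in the explanation below:

from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()

# Toy data: two distinct rows appear twice each, two rows are unique
df = spark.createDataFrame(
    [(1, "A", "B"), (1, "A", "B"),
     (2, "B", "E"), (2, "B", "E"),
     (3, "D", "G"), (4, "D", "G")],
    ["col1", "col2", "col3"],
)

(df.groupBy(df.columns)
    .count()
    .where(f.col("count") > 1)
    .select(f.sum("count"))
    .show())  # prints sum(count) = 4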

Explanation

After the grouping and aggregation, your data will look like the table below. The first three columns are your original columns (shown here with illustrative names), and count is the column added by count():

+----+----+----+-----+
|col1|col2|col3|count|
+----+----+----+-----+
|   1|   A|   B|    2|
|   2|   B|   E|    2|
|   3|   D|   G|    1|
|   4|   D|   G|    1|
+----+----+----+-----+

Then use where() to keep only the rows with a count greater than 1, and select the sum. In this example, that keeps the first two rows, whose counts sum to 4: four rows in the original data are duplicates of at least one other row.
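
Since the question asks about Spark SQL specifically, the same logic can also be written as a query. This is a sketch that assumes df is registered as a temp view named t and has the columns col1, col2, and col3 (illustrative names, as above):

df.createOrReplaceTempView("t")

spark.sql("""
    SELECT SUM(cnt) AS duplicate_rows
    FROM (
        SELECT COUNT(*) AS cnt
        FROM t
        GROUP BY col1, col2, col3
        HAVING COUNT(*) > 1
    ) AS g
""").show()

Here GROUP BY with HAVING COUNT(*) > 1 plays the role of groupBy().count().where(), and the outer SUM corresponds to select(f.sum('count')).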
