Question
I have a requirement where I need to count the number of duplicate rows in SparkSQL for Hive tables.
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
from pyspark.sql.types import *
from pyspark.sql import Row

app_name = "test"
conf = SparkConf().setAppName(app_name)
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)  # Hive-enabled SQLContext
df = sqlContext.sql("select * from DV_BDFRAWZPH_NOGBD_R000_SG.employee")
As of now I have hardcoded the table name, but it actually comes in as a parameter. That being said, we don't know the number of columns or their names either. In Python pandas we have something like df.duplicated().sum() to count the number of duplicate records. Do we have something like this here?
+---+---+---+
| 1 | A | B |
+---+---+---+
| 1 | A | B |
+---+---+---+
| 2 | B | E |
+---+---+---+
| 2 | B | E |
+---+---+---+
| 3 | D | G |
+---+---+---+
| 4 | D | G |
+---+---+---+
Here the number of duplicate rows is 4 (for example).
Answer 1:
You essentially want to groupBy() all the columns and count(), then select the sum of the counts for the rows where the count is greater than 1.
import pyspark.sql.functions as f
df.groupBy(df.columns)\
.count()\
.where(f.col('count') > 1)\
.select(f.sum('count'))\
.show()
Explanation
After the grouping and aggregation, your data will look like this:
+---+---+---+---+
| 1 | A | B | 2 |
+---+---+---+---+
| 2 | B | E | 2 |
+---+---+---+---+
| 3 | D | G | 1 |
+---+---+---+---+
| 4 | D | G | 1 |
+---+---+---+---+
Then use where() to filter only the rows with a count greater than 1, and select the sum. In this case, you will get the first 2 rows, which sum to 4.
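The group-count-filter-sum logic above can be sketched without a Spark cluster using plain Python, just to make the arithmetic concrete. This is an illustration of the counting scheme on the example table, not the Spark API itself:

```python
from collections import Counter

# Rows mirroring the example table in the question
rows = [
    (1, "A", "B"),
    (1, "A", "B"),
    (2, "B", "E"),
    (2, "B", "E"),
    (3, "D", "G"),
    (4, "D", "G"),
]

# groupBy(all columns) + count(): tally identical rows
counts = Counter(rows)

# where(count > 1) + select(sum(count)): sum the sizes of the
# groups that contain more than one row
duplicate_rows = sum(c for c in counts.values() if c > 1)

print(duplicate_rows)  # → 4
```

The two groups of size 2 contribute 2 + 2 = 4, matching the expected answer.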
Source: https://stackoverflow.com/questions/48554619/count-number-of-duplicate-rows-in-sparksql