Fastest “Get Duplicates” SQL script

女生的网名这么多〃 Submitted on 2019-11-28 15:49:24

Question


What is an example of fast SQL for finding duplicates in datasets with hundreds of thousands of records? I typically use something like:

SELECT afield1, afield2 FROM afile a 
WHERE 1 < (SELECT count(afield1) FROM afile b WHERE a.afield1 = b.afield1);

But this is quite slow.


Answer 1:


This is the more direct way:

select afield1,count(afield1) from atable 
group by afield1 having count(afield1) > 1
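To see the GROUP BY / HAVING approach in action, here is a minimal runnable sketch using an in-memory SQLite database via Python's sqlite3 module; the table and column names (atable, afield1) mirror the answer above and are illustrative only.

```python
import sqlite3

# Build a tiny table where 'b' and 'c' are duplicated.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atable (afield1 TEXT)")
conn.executemany("INSERT INTO atable VALUES (?)",
                 [("a",), ("b",), ("b",), ("c",), ("c",), ("c",)])

# The query from the answer, plus ORDER BY for a deterministic result.
dupes = conn.execute(
    "SELECT afield1, COUNT(afield1) FROM atable "
    "GROUP BY afield1 HAVING COUNT(afield1) > 1 "
    "ORDER BY afield1"
).fetchall()
conn.close()
print(dupes)  # [('b', 2), ('c', 3)]
```

Because the grouping happens in a single pass (one scan plus a sort or hash), this avoids the per-row correlated subquery of the original approach, which is where the speedup comes from.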



Answer 2:


You could try:

select afield1, afield2 from afile a
where afield1 in
( select afield1
  from afile
  group by afield1
  having count(*) > 1
);



Answer 3:


A similar question was asked last week. There are some good answers there.

SQL to find duplicate entries (within a group)

In that question, the OP was interested in all the columns (fields) in the table (file), but rows belonged to the same group if they had the same key value (afield1).

There are three kinds of answers:

subqueries in the where clause, like some of the other answers in here.

an inner join between the table and the groups viewed as a table (my answer)

and analytic queries (something that's new to me).




Answer 4:


By the way, if anyone wants to remove the duplicates, I have used this:

delete from MyTable where MyTableID in (
  select max(MyTableID)
  from MyTable
  group by Thing1, Thing2, Thing3
  having count(*) > 1
)
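One caveat worth verifying: because the subquery selects only MAX(MyTableID) per group, each execution deletes just one row per duplicate group, so groups with three or more copies need the statement run repeatedly until no duplicates remain. Here is a sketch demonstrating that, simplified to a single grouping column (Thing1) in an in-memory SQLite database.

```python
import sqlite3

# 'x' appears three times, 'y' once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (MyTableID INTEGER PRIMARY KEY, Thing1 TEXT)")
conn.executemany("INSERT INTO MyTable (Thing1) VALUES (?)",
                 [("x",), ("x",), ("x",), ("y",)])

delete_sql = """
    DELETE FROM MyTable WHERE MyTableID IN (
        SELECT MAX(MyTableID) FROM MyTable
        GROUP BY Thing1 HAVING COUNT(*) > 1
    )
"""
conn.execute(delete_sql)
after_first = conn.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0]
print(after_first)   # 3 -- only one of the three 'x' rows was removed

conn.execute(delete_sql)
after_second = conn.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0]
print(after_second)  # 2 -- second pass removes the remaining duplicate
conn.close()
```

Also note that some databases (MySQL, for one) reject a DELETE whose subquery reads the same table, so this exact form may need adapting to your engine.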



Answer 5:


This should be reasonably fast (even faster if the dupeFields are indexed).

SELECT DISTINCT a.id, a.dupeField1, a.dupeField2
FROM TableX a
JOIN TableX b
ON a.dupeField1 = b.dupeField1
AND a.dupeField2 = b.dupeField2
AND a.id != b.id

I guess the only downside to this query is that because you're not doing a COUNT(*), you can't check how many times a row is duplicated, only that it appears more than once.



Source: https://stackoverflow.com/questions/197111/fastest-get-duplicates-sql-script
