How can I delete one of two perfectly identical rows?


Question


I am cleaning out a database table without a primary key (I know, I know, what were they thinking?). I cannot add a primary key, because there is a duplicate in the column that would become the key. The duplicate value comes from one of two rows that are in all respects identical. I can't delete the row via a GUI (in this case MySQL Workbench, but I'm looking for a database-agnostic approach) because it refuses to perform tasks on tables without primary keys (or at least a UNIQUE NOT NULL column).

How can I delete one of the twins?


Answer 1:


One option to solve your problem is to create a new table with the same schema, and then do:

INSERT INTO new_table (SELECT DISTINCT * FROM old_table)

and then just rename the tables.

You will of course need roughly as much free disk space as the table itself occupies to do this!

It's not efficient, but it's incredibly simple.
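A minimal sketch of the whole sequence, assuming MySQL (table names are placeholders; CREATE TABLE ... LIKE copies the column definitions but not the data):

    CREATE TABLE new_table LIKE old_table;
    INSERT INTO new_table SELECT DISTINCT * FROM old_table;
    RENAME TABLE old_table TO old_table_backup, new_table TO old_table;
    -- DROP TABLE old_table_backup;  -- once you have verified the result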




Answer 2:


SET ROWCOUNT 1
DELETE FROM [table] WHERE ....
SET ROWCOUNT 0

This will delete only one of the two identical rows.
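A hypothetical filled-in example, assuming SQL Server (T-SQL), where SET ROWCOUNT is available; table and column names are placeholders:

    SET ROWCOUNT 1;
    DELETE FROM some_table
    WHERE col1 = 'value1' AND col2 = 'value2';  -- matches both twins, but only one row is deleted
    SET ROWCOUNT 0;  -- reset so later statements are not limited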




Answer 3:


Note that MySQL has its own extension to DELETE, namely DELETE ... LIMIT, which works the way you would expect from LIMIT: http://dev.mysql.com/doc/refman/5.0/en/delete.html

The MySQL-specific LIMIT row_count option to DELETE tells the server the maximum number of rows to be deleted before control is returned to the client. This can be used to ensure that a given DELETE statement does not take too much time. You can simply repeat the DELETE statement until the number of affected rows is less than the LIMIT value.

Therefore, you could use DELETE FROM some_table WHERE x = 'y' AND foo = 'bar' LIMIT 1; note that there isn't a simple way to say "delete everything except one" - just keep checking whether you still have duplicate rows, as in the sketch below.
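A hedged sketch of that check-and-delete loop in MySQL, reusing the placeholder names from the example above:

    -- See how many copies exist:
    SELECT COUNT(*) FROM some_table WHERE x = 'y' AND foo = 'bar';

    -- If it returns 2, delete exactly one copy (in general, LIMIT count - 1):
    DELETE FROM some_table WHERE x = 'y' AND foo = 'bar' LIMIT 1;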




Answer 4:


For PostgreSQL you can do this:

DELETE FROM tablename
WHERE id IN (SELECT id
          FROM (SELECT id, ROW_NUMBER() 
               OVER (partition BY column1, column2, column3 ORDER BY id) AS rnum
               FROM tablename) t
          WHERE t.rnum > 1);

column1, column2, column3 form the column set over which the rows have duplicate values.





Answer 5:


DELETE TOP (1) works on Microsoft SQL Server (T-SQL).
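A hypothetical sketch (table and column names are placeholders):

    DELETE TOP (1)
    FROM some_table
    WHERE col1 = 'value1' AND col2 = 'value2';  -- matches both twins, but only one is removed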




Answer 6:


This can be accomplished using a CTE and the ROW_NUMBER() function, as below:

/* Sample Data */
    CREATE TABLE #dupes (ID INT, DWCreated DATETIME2(3))

    INSERT INTO #dupes (ID, DWCreated) SELECT 1, '2015-08-03 01:02:03.456'
    INSERT INTO #dupes (ID, DWCreated) SELECT 2, '2014-08-03 01:02:03.456'
    INSERT INTO #dupes (ID, DWCreated) SELECT 1, '2013-08-03 01:02:03.456'

/* Check sample data - returns three rows, with two rows for ID#1 */
    SELECT * FROM #dupes 

/* CTE to give each row that shares an ID a unique number */
    ;WITH toDelete AS
      (
        SELECT ID, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY DWCreated) AS RN
        FROM #dupes 
      )

  /* Delete any row that is not the first instance of an ID */
    DELETE FROM toDelete WHERE RN > 1

/* Check the results: ID is now unique */
    SELECT * FROM #dupes

/* Clean up */
    DROP TABLE #dupes

Having a column to ORDER BY is handy, but not necessary unless you have a preference for which of the rows to delete. This will also handle all instances of duplicate records, rather than forcing you to delete one row at a time.




Answer 7:


Have you tried LIMIT 1? It will delete only one of the rows that match your DELETE query:

DELETE FROM `table_name` WHERE `column_name`='value' LIMIT 1;



Answer 8:


You could use MAX(), which was relevant in my case.

DELETE FROM [table] WHERE id IN
(SELECT MAX(id) FROM [table] GROUP BY col2, col3 HAVING COUNT(id) > 1)

Be sure to test your results first and keep a limiting condition in your HAVING clause. With such a sweeping delete query you might want to back up your database first.
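For example, a hedged way to preview what would be removed before running the DELETE above (using the same placeholder columns):

    -- Inspect the candidate rows before deleting anything:
    SELECT *
    FROM [table]
    WHERE id IN (SELECT MAX(id) FROM [table] GROUP BY col2, col3 HAVING COUNT(id) > 1);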




Answer 9:


In my case I could get the GUI to give me a string of values of the row in question (alternatively, I could have done this by hand). On the suggestion of a colleague, in whose debt I remain, I used this to create an INSERT statement:

INSERT INTO some_table
VALUES ('ID1219243408800307444663', '2004-01-20 10:20:55', 'INFORMATION', 'admin', ...);

I tested the insert statement, so that I now had triplets. Finally, I ran a simple DELETE to remove all of them...

DELETE FROM some_table WHERE logid = 'ID1219243408800307444663';

followed by the INSERT one more time, leaving me with a single row, and the bright possibilities of a primary key.




Answer 10:


If you can add a column, like

  ALTER TABLE yourtable ADD IDCOLUMN bigint NOT NULL IDENTITY (1, 1)

then do so.

Then count the rows grouped by your problem column, keeping only groups with a count > 1; this identifies your twins (or triplets, or whatever).

Then select the rows whose problem column equals one of the identified values and note their IDs in IDCOLUMN.

Finally, delete from your table where IDCOLUMN equals one of those IDs, as sketched below.
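A sketch of those steps in T-SQL; problemcolumn, the literal 'duplicated value', and the id 42 below are placeholders, not from the original:

    -- 1. Identify the duplicated value(s):
    SELECT problemcolumn, COUNT(*) AS c
    FROM yourtable
    GROUP BY problemcolumn
    HAVING COUNT(*) > 1;

    -- 2. Look up the surrogate ids of the twins:
    SELECT IDCOLUMN, problemcolumn
    FROM yourtable
    WHERE problemcolumn = 'duplicated value';

    -- 3. Delete exactly one of them by its surrogate id:
    DELETE FROM yourtable
    WHERE IDCOLUMN = 42;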




Answer 11:


This works for PostgreSQL

DELETE FROM tablename WHERE id = 123 AND ctid IN (SELECT ctid FROM tablename WHERE id = 123 LIMIT 1)



Answer 12:


I added a Guid column to the table and set it to generate a new id for each row. Then I could delete the rows using a GUI.
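A hypothetical T-SQL sketch of that idea (column and table names are placeholders); other databases have equivalent UUID functions:

    -- Add a nullable GUID column, then fill it with a distinct value per row:
    ALTER TABLE some_table ADD RowGuid uniqueidentifier NULL;

    UPDATE some_table SET RowGuid = NEWID();  -- NEWID() produces a different GUID for every row

    -- Each twin can now be told apart (and deleted) by its RowGuid.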




Answer 13:


In PostgreSQL there is an implicit system column called ctid; see the PostgreSQL wiki. So you are free to use the following:

WITH cte1 as(
    SELECT unique_column, max( ctid ) as max_ctid
    FROM table_1
    GROUP BY unique_column
    HAVING count(*) > 1
), cte2 as(
    SELECT t.ctid as target_ctid
    FROM table_1 t
    JOIN cte1 USING( unique_column )
    WHERE t.ctid != max_ctid
)
DELETE FROM table_1
WHERE ctid IN( SELECT target_ctid FROM cte2 )

I'm not sure how safe this is when there is a possibility of concurrent updates, so it may be sensible to take a LOCK TABLE table_1 IN ACCESS EXCLUSIVE MODE; before actually doing the cleanup.
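A minimal sketch of that precaution, wrapping the cleanup in a transaction so the lock is released on COMMIT:

    BEGIN;
    LOCK TABLE table_1 IN ACCESS EXCLUSIVE MODE;
    -- ... run the DELETE statement above ...
    COMMIT;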




Answer 14:


If there are multiple duplicate rows to delete, all fields are identical, there is no distinguishing id, and the table has no primary key, one option is to save one copy of each duplicate row (with DISTINCT) in a new table, delete all the duplicate rows, and insert the saved rows back. This is helpful if the table is really big and the number of duplicate rows is small.

-- col1, col2 ... coln are the table columns that are relevant.
-- If not sure, add all columns of the table to the SELECT below and to the WHERE clauses later.

-- Make a copy of table T first so you can roll back at any time, if possible.
-- Check @@ROWCOUNT after each statement to be sure it is what you expect.
-- Use a transaction and roll back in case there is an error.

-- First find all the duplicate rows that are identical; this statement could be combined
-- with the next one if you choose all columns.

select col1, col2,                    -- other columns as needed
       count(*) as c
into temp_duplicate
from T
group by col1, col2
having count(*) > 1

-- Save each set of identical rows only once (DISTINCT).

select distinct T.*
into temp_insert
from T, temp_duplicate D
where T.col1 = D.col1
  and T.col2 = D.col2                 -- and other columns if needed

-- Delete all the rows that are duplicated.

delete T
from T, temp_duplicate D
where T.col1 = D.col1
  and T.col2 = D.col2                 -- and other columns if needed

-- Add the duplicated rows back, now only once each.

insert into T
select * from temp_insert

-- Drop the temp tables after you check that all is ok.



Answer 15:


If, like me, you don't want to have to list out all the columns of the database, you can convert each row to JSONB and compare by that.

(NOTE: This is incredibly inefficient - be careful!)

SELECT to_jsonb(a.*), to_jsonb(b.*)
FROM
    some_table a
        LEFT JOIN some_table b
ON
    a.entry_date < b.entry_date
WHERE (SELECT NOT EXISTS(
    SELECT
    FROM jsonb_each_text(to_jsonb(a.*) - 'unwanted_column') t1
         FULL OUTER JOIN jsonb_each_text(to_jsonb(b.*) - 'unwanted_column') t2 USING (key)
    WHERE t1.value <> t2.value OR t1.key IS NULL OR t2.key IS NULL
))


Source: https://stackoverflow.com/questions/16438323/how-can-i-delete-one-of-two-perfectly-identical-rows
