Best way to prevent duplicate data on copy csv postgresql


Question


This is more of a conceptual question because I'm planning how best to achieve our goals here.

I have a postgresql/postgis table with 5 columns. I'll be inserting/appending data into the database from a csv file every 10 minutes or so via the copy command. There will likely be some duplicate rows of data, so I'd like to copy the data from the csv file to the postgresql table while preventing any duplicate entries from getting into the table. There are three columns ("latitude", "longitude" and "time") which, if they are all equal, mean the entry is a duplicate. Should I make a composite key from all three columns? If I do that, will it just throw an error upon trying to copy the csv file into the database? I'm going to be copying the csv file automatically, so I would want it to go ahead and copy the rows that aren't duplicates and skip the duplicates. Is there a way to do this?

Also, I of course want it to look for duplicates in the most efficient way. I don't need to look through the whole table (which will be quite large) for duplicates...just the past 20 minutes or so via the timestamp on the row. I've indexed the table on the time column.

Thanks for any help!


Answer 1:


I think I would take the following approach.

First, create a unique index on the three columns that you care about:

create unique index idx_bigtable_col1_col2_col3 on bigtable(col1, col2, col3);

Then, load the data into a staging table using copy. Finally, you can do:

insert into bigtable(col1, . . . )
    select col1, . . .
    from stagingtable st
    where (col1, col2, col3) not in (select col1, col2, col3 from bigtable);

Assuming no other data modifications are going on, this should accomplish what you want. Checking for duplicates using the index should be ok from a performance perspective.
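For reference, the staging-table load that precedes the insert above might look something like this; the staging-table definition and the CSV path are placeholders rather than part of the original answer:

-- staging table with the same shape as the target
create temp table stagingtable (like bigtable including defaults);

-- server-side load; from psql, \copy can be used instead for a client-side file
copy stagingtable from '/path/to/latest.csv' with (format csv, header true);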

An alternative method is to emulate MySQL's "on duplicate key update" behavior to ignore such records. Bill Karwin suggests implementing a rule in an answer to this question. The documentation for rules is here. Something similar could also be done with triggers.
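As a rough sketch (adapted to the placeholder column names used above, not taken from the linked answer), such a rule could look like:

-- silently discard inserts that would duplicate an existing (col1, col2, col3)
create rule bigtable_ignore_dups as
    on insert to bigtable
    where exists (
        select 1 from bigtable
        where col1 = new.col1
          and col2 = new.col2
          and col3 = new.col3
    )
    do instead nothing;

Note that rules fire for INSERT statements but not for COPY itself, so this still assumes the data goes through a staging table (or row-by-row inserts) rather than being copied straight into bigtable.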




Answer 2:


Upsert

The answer by Linoff is correct but can be simplified a bit with Postgres 9.5's new "UPSERT" feature (a.k.a. MERGE). That new feature is implemented in Postgres as the INSERT ON CONFLICT syntax.

Rather than explicitly checking for a violation of the unique index, we can let the ON CONFLICT clause detect the violation. Then we DO NOTHING, meaning we abandon the effort to INSERT without bothering to attempt an UPDATE. If we cannot insert, we just move on to the next row.

We get the same results as Linoff’s code but lose the WHERE clause.

INSERT INTO bigtable (col1, … )
    SELECT col1, …
    FROM stagingtable st
ON CONFLICT (col1, col2, col3)  -- the conflict target is the column list covered by the unique index
DO NOTHING;
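Worth noting: ON CONFLICT needs a unique index or constraint covering the conflict-target columns, so the unique index from the first answer (or a composite primary key on the three columns, as suggested in the question) is still a prerequisite. And because the conflict check goes through that index, there is no need to restrict the duplicate check to the last 20 minutes of data.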


Source: https://stackoverflow.com/questions/31639108/best-way-to-prevent-duplicate-data-on-copy-csv-postgresql
