Question:
I have to dump a large amount of data from a file into a PostgreSQL table. I know it does not support 'IGNORE' or 'REPLACE' the way MySQL does. Almost all posts on the web about this suggest the same thing: dump the data into a temp table and then do an 'insert ... select ... where not exists ...'.
This will not help in one case: when the file data itself contains duplicate primary keys. Does anybody have an idea how to handle this in PostgreSQL?
P.S. I am doing this from a Java program, if that helps.
Answer 1:
Use the same approach you described, but DELETE (or group, or modify ...) the rows with duplicate PKs in the temp table before loading it into the main table.
Something like:
CREATE TEMP TABLE tmp_table
ON COMMIT DROP
AS SELECT * FROM main_table WITH NO DATA;

COPY tmp_table FROM 'full/file/name/here';

INSERT INTO main_table
SELECT DISTINCT ON (PK_field) *
FROM tmp_table
ORDER BY PK_field, some_fields;  -- DISTINCT ON requires PK_field to lead the ORDER BY;
                                 -- the later columns decide which duplicate wins
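The INSERT above uses DISTINCT ON to keep one row per key. The DELETE variant mentioned at the start could instead look like this sketch, run against tmp_table before the final INSERT (PK_field stands in for the actual key column):

DELETE FROM tmp_table a
USING tmp_table b
WHERE a.PK_field = b.PK_field
  AND a.ctid < b.ctid;  -- ctid identifies the physical row; one row per key survives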
Details: CREATE TABLE AS, COPY, DISTINCT ON
Answer 2:
PostgreSQL 9.5 now has upsert functionality. You can follow Igor's instructions, except that the final INSERT includes the clause ON CONFLICT DO NOTHING.
INSERT INTO main_table
SELECT * FROM tmp_table
ON CONFLICT DO NOTHING;
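If you want the behaviour of MySQL's REPLACE rather than IGNORE, ON CONFLICT ... DO UPDATE does that. A sketch, assuming the key column is PK_field and some_col is a placeholder payload column; note that DO UPDATE raises an error if the same key occurs twice within a single INSERT, so the DISTINCT ON dedup from Answer 1 is still needed:

INSERT INTO main_table
SELECT DISTINCT ON (PK_field) * FROM tmp_table
ORDER BY PK_field
ON CONFLICT (PK_field)
DO UPDATE SET some_col = EXCLUDED.some_col;  -- EXCLUDED refers to the row proposed for insertion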
Answer 3:
Igor's answer helped me a lot, but I also ran into the problem Nate mentioned in his comment. Then I had the problem (maybe in addition to the question here) that the new data not only contained duplicates internally but also rows duplicating the existing data. What worked for me was the following.
CREATE TEMP TABLE tmp_table AS
SELECT * FROM newsletter_subscribers;

COPY tmp_table (name, email) FROM stdin DELIMITER ' ' CSV;

SELECT count(*) FROM tmp_table;  -- Just to be sure

TRUNCATE newsletter_subscribers;

INSERT INTO newsletter_subscribers
SELECT DISTINCT ON (email) * FROM tmp_table
ORDER BY email, subscription_status;

SELECT count(*) FROM newsletter_subscribers;  -- Paranoid again
Both internal and external duplicates become the same thing in tmp_table, and the DISTINCT ON (email) part then removes them. The ORDER BY makes sure that the desired row comes first in the result set; DISTINCT then discards all further rows.
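One caveat with this recipe: issued one by one, TRUNCATE and the re-INSERT run as separate transactions, so a concurrent reader could briefly see an empty table. Wrapping the swap in a single transaction (a sketch of the same statements) closes that window:

BEGIN;
TRUNCATE newsletter_subscribers;
INSERT INTO newsletter_subscribers
SELECT DISTINCT ON (email) * FROM tmp_table
ORDER BY email, subscription_status;
COMMIT;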
Answer 4:
Insert into a temp table grouped by the key, so that you get rid of the duplicates, and then insert if not exists.
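A minimal sketch of this approach, with placeholder names throughout (raw_staging, main_table, PK_field, some_col):

-- 1) Dedupe by grouping on the key, picking one value per key.
CREATE TEMP TABLE tmp_dedup AS
SELECT PK_field, max(some_col) AS some_col
FROM raw_staging
GROUP BY PK_field;

-- 2) Insert only the keys that are not already in the main table.
INSERT INTO main_table (PK_field, some_col)
SELECT t.PK_field, t.some_col
FROM tmp_dedup t
WHERE NOT EXISTS (
    SELECT 1 FROM main_table m WHERE m.PK_field = t.PK_field
);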