Migrating `int` to `bigint` in PostgreSQL without any downtime?

[亡魂溺海] submitted on 2019-12-04 03:47:50

Question


I have a database that is going to experience the integer exhaustion problem that Basecamp famously faced back in November. I have several months to figure out what to do.

Is there a no-downtime-required, proactive solution to migrating this column type? If so what is it? If not, is it just a matter of eating the downtime and migrating the column when I can?

Is this article sufficient, assuming I have several days/weeks to perform the migration now before I'm forced to do it when I run out of ids?


Answer 1:


Use logical replication.

With logical replication you can have different data types on the primary and on the standby.

Copy the schema with pg_dump -s, change the data types on the copy and then start logical replication.

Once all data is copied over, switch the application to use the standby.

For zero downtime, the application has to be able to reconnect and retry, but that's always a requirement in such a case.

You need PostgreSQL v10 or later for this, and the schema must not change during the migration, as DDL is not replicated.
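
A minimal sketch of the moving parts, assuming a hypothetical table accounts; the publication and subscription names and the connection string are made up:

    -- On the primary: publish the table(s) whose id columns need widening.
    CREATE PUBLICATION bigint_migration FOR TABLE accounts;

    -- On the standby: first restore the schema dumped with pg_dump -s,
    -- edited so that accounts.id is bigint instead of integer, then:
    CREATE SUBSCRIPTION bigint_migration
        CONNECTION 'host=primary-host dbname=mydb user=replicator'
        PUBLICATION bigint_migration;

Note that logical replication copies rows but not sequence state, so the id sequence on the standby has to be advanced past the primary's current value before switching the application over.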




Answer 2:


Create a copy of the old table, but with the ID column changed to bigint. Next, create a trigger on the old table that inserts new rows into both tables. Finally, copy the existing data from the old table to the new one (it helps to distinguish pre-trigger rows from post-trigger rows, for example by id if it is sequential, so the backfill and the trigger don't collide). Once you are done, switch the tables and delete the old one, as sketched below.

This obviously requires twice as much space (and time for the copy), but it works without any downtime.
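
A sketch of this approach, using a hypothetical accounts table with a serial id column (all names and the backfill cutoff are made up):

    -- New table, identical except that id becomes bigint.
    CREATE TABLE accounts_new (LIKE accounts INCLUDING ALL);
    ALTER TABLE accounts_new ALTER COLUMN id TYPE bigint;

    -- Trigger on the old table that mirrors every new row into the copy.
    CREATE FUNCTION mirror_accounts() RETURNS trigger AS $$
    BEGIN
        INSERT INTO accounts_new VALUES (NEW.*);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER accounts_mirror
        AFTER INSERT ON accounts
        FOR EACH ROW EXECUTE PROCEDURE mirror_accounts();

    -- Backfill the pre-trigger rows; 1000000 stands in for the first
    -- id written after the trigger went live (hypothetical cutoff).
    INSERT INTO accounts_new SELECT * FROM accounts WHERE id < 1000000;

    -- Switch the tables in one transaction; drop the old one later.
    BEGIN;
    ALTER TABLE accounts RENAME TO accounts_old;
    ALTER TABLE accounts_new RENAME TO accounts;
    COMMIT;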




Answer 3:


Another solution for pre-v10 databases where all transactions are short:

  • Add a bigint column to the table.

  • Create a BEFORE trigger that sets the new column whenever a row is added or updated.

  • Run a series of UPDATEs that set the new column from the old one where it IS NULL. Keep those batches short so you don't hold locks for long and don't run into deadlocks. Run these transactions with session_replication_role = replica so they don't fire triggers (see the sketch after this list).

  • Once all rows are updated, create a unique index CONCURRENTLY on the new column.

  • Add a unique constraint USING the index you just created. That will be fast.

  • Perform the switch:

    BEGIN;
    ALTER TABLE ... DROP COLUMN oldcol;
    ALTER TABLE ... RENAME COLUMN newcol TO oldcol;
    COMMIT;
    

    That will be fast.
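
A sketch of the trigger, batched backfill, index, and constraint steps, assuming a hypothetical accounts table with old column id and new column id_new (all names made up):

    -- The new bigint column.
    ALTER TABLE accounts ADD COLUMN id_new bigint;

    -- A BEFORE trigger that keeps it in sync on writes.
    CREATE FUNCTION accounts_sync_id() RETURNS trigger AS $$
    BEGIN
        NEW.id_new := NEW.id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER accounts_sync_id
        BEFORE INSERT OR UPDATE ON accounts
        FOR EACH ROW EXECUTE PROCEDURE accounts_sync_id();

    -- Backfill in short batches; with the role set to replica,
    -- ordinary triggers (including the one above) do not fire.
    SET session_replication_role = replica;
    UPDATE accounts SET id_new = id
    WHERE id_new IS NULL AND id BETWEEN 1 AND 10000;  -- repeat per range
    RESET session_replication_role;

    -- Unique index without blocking writes, then the constraint on
    -- top of it (fast, since the index already exists).
    CREATE UNIQUE INDEX CONCURRENTLY accounts_id_new_key ON accounts (id_new);
    ALTER TABLE accounts ADD CONSTRAINT accounts_id_new_key
        UNIQUE USING INDEX accounts_id_new_key;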

The new column still has no NOT NULL constraint; adding one outright cannot be done without a long, invasive lock. But you can add a CHECK (... IS NOT NULL) constraint created as NOT VALID. That is good enough, and you can validate it later without disruption.
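
Continuing with the hypothetical names above, that could look like:

    -- Instant: the constraint is recorded, existing rows are not scanned.
    ALTER TABLE accounts
        ADD CONSTRAINT accounts_id_new_not_null
        CHECK (id_new IS NOT NULL) NOT VALID;

    -- Later, at leisure: scans the table but does not block writes.
    ALTER TABLE accounts VALIDATE CONSTRAINT accounts_id_new_not_null;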

If there are foreign key constraints, things get a little more complicated. You have to drop them and create NOT VALID foreign keys referencing the new column.
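
For a hypothetical orders table referencing accounts, the recreated foreign key might look like this (constraint and column names made up):

    -- Drop the old foreign key and add a replacement that skips the
    -- initial full-table validation scan.
    ALTER TABLE orders DROP CONSTRAINT orders_account_id_fkey;
    ALTER TABLE orders
        ADD CONSTRAINT orders_account_id_fkey
        FOREIGN KEY (account_id) REFERENCES accounts (id) NOT VALID;

    -- Validate later without blocking normal traffic.
    ALTER TABLE orders VALIDATE CONSTRAINT orders_account_id_fkey;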



Source: https://stackoverflow.com/questions/54795701/migrating-int-to-bigint-in-postgressql-without-any-downtime
