Question
So I have a table with a large dataset, and this table has three columns that I would like to drop.
The question is: how will Postgres deal with it?
Will it walk through every entry or will it just update mapping info without much overhead?
Can I just run an ALTER TABLE ... DROP COLUMN, or should I use a swap-table approach in this particular case?
And, if it makes any difference, all three columns have fixed length (two integers and one numeric).
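For reference, what I have in mind is a single statement along these lines (the table and column names here are just placeholders for my actual schema):

```sql
-- Drop all three columns in one ALTER TABLE,
-- so the exclusive lock on the table is taken only once.
ALTER TABLE big_table
    DROP COLUMN col_a,
    DROP COLUMN col_b,
    DROP COLUMN col_c;
```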
I'm sorry if it's been asked already, but Google couldn't find any related questions / articles ...
Answer 1:
ALTER TABLE ... DROP COLUMN only marks the column as dropped in the system catalogs. That is why it is very fast, but it does not remove the data from the heap files. To compact the allocated file space you have to run VACUUM FULL afterwards, which is much slower and takes an exclusive lock on the table. So the DROP COLUMN itself is cheap; reclaiming the disk space is the expensive part.
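A minimal sketch of that two-step workflow (big_table and col_a are placeholder names):

```sql
-- Step 1: effectively instant, catalog-only change;
-- the old column values remain in the heap files.
ALTER TABLE big_table DROP COLUMN col_a;

-- Step 2 (optional, can be done later): rewrite the table to reclaim the space.
-- Takes an ACCESS EXCLUSIVE lock and needs enough disk space for a full copy of the table.
VACUUM FULL big_table;
```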
Answer 2:
Google may be useless for this question, but the manual rarely fails:
The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a null value for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated.
And:
To force an immediate rewrite of the table, you can use VACUUM FULL, CLUSTER or one of the forms of ALTER TABLE that forces a rewrite. This results in no semantically-visible change in the table, but gets rid of no-longer-useful data.
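Any of the following would force such a rewrite; which one fits best depends on your workload, and the names here are only placeholders:

```sql
-- Rewrite in place; simplest, but holds an ACCESS EXCLUSIVE lock for the duration.
VACUUM FULL big_table;

-- Rewrite while physically reordering rows by an existing index.
CLUSTER big_table USING big_table_pkey;

-- One of the ALTER TABLE forms that forces a rewrite, e.g. widening an integer column.
ALTER TABLE big_table ALTER COLUMN some_id TYPE bigint;
```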
Specifically, the column attisdropped in the system catalog table pg_attribute is set to TRUE.
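You can see this for yourself in the catalog with a query like the one below (big_table is a placeholder; in my experience the attname of a dropped column is also replaced by an internal "pg.dropped" placeholder):

```sql
-- List the user columns of the table, including dropped ones.
SELECT attname, attnum, attisdropped
FROM   pg_attribute
WHERE  attrelid = 'big_table'::regclass
AND    attnum > 0
ORDER  BY attnum;
```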
Source: https://stackoverflow.com/questions/15699989/dropping-column-in-postgres-on-a-large-dataset