Question
I have 2 databases, with ~100,000 rows missing from the table field_collection_item in db1, which I'd like to repair by exporting from db2.
My plan to accomplish this was to:
- identify the missing items by item_id in db2, exporting the list of item_ids (a sketch follows this list).
- import the item_ids into db1, into a new table missing_field_collection_item.
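A minimal sketch of both steps, assuming the two schemas live on one server under the names db1 and db2 (if they are on separate servers, dump the item_id column from each and diff the files instead); the file path and column type are illustrative:

-- Step 1: list item_ids present in db2 but absent from db1
-- (assumes secure_file_priv allows writing to /tmp).
SELECT d2.item_id INTO OUTFILE '/tmp/missing_item_ids.txt'
FROM db2.field_collection_item AS d2
LEFT JOIN db1.field_collection_item AS d1 USING (item_id)
WHERE d1.item_id IS NULL;

-- Step 2: create the holding table and load the list into it
-- (the column type is a guess; match the real field_collection_item.item_id).
CREATE TABLE missing_field_collection_item (item_id INT UNSIGNED NOT NULL PRIMARY KEY);
LOAD DATA INFILE '/tmp/missing_item_ids.txt' INTO TABLE missing_field_collection_item;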
Using the following mysqldump to pull the data:
mysqldump -u USER -pPASS DATABASE --no-create-info --tables field_collection_item --where="item_id IN (SELECT item_id FROM missing_field_collection_item);"
However, this gives the error:
Couldn't execute 'SELECT /*!40001 SQL_NO_CACHE */ * FROM `field_collection_item` WHERE item_id IN (SELECT item_id FROM missing_field_collection_item);': Table 'missing_field_collection_item' was not locked with LOCK TABLES (1100)
I would prefer to do this without making changes to db2; however, it's not strictly necessary, if it turns out the only realistic way to do this is to drop the rows I don't want and then dump without a where clause.
UPDATE
I discovered the above works simply by adding --single-transaction, which seems to turn off locking. This should be safe since db2 isn't live; however, I'm not confident I understand the side-effects, so I won't accept this as the answer without a second opinion.
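For reference, that is the original command unchanged except for the added flag:

mysqldump -u USER -pPASS DATABASE --single-transaction --no-create-info --tables field_collection_item --where="item_id IN (SELECT item_id FROM missing_field_collection_item);"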
Answer 1:
If your tables are MyISAM, the safest, easiest way to handle this is to pass the flag --lock-all-tables. If your tables are InnoDB, then --single-transaction is better.
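Applied to the question's command, the MyISAM variant looks like this (the --single-transaction form appears in the update above):

mysqldump -u USER -pPASS DATABASE --lock-all-tables --no-create-info --tables field_collection_item --where="item_id IN (SELECT item_id FROM missing_field_collection_item);"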
Answer 2:
If you don't need a consistency guarantee, you can disable locking without a single transaction by adding:
--lock-tables=false
I use this to do the same thing you're after (dumping subsets of the data), and on replication slaves that I can stop (making the dump consistent anyway). An advantage over --single-transaction is that you can use/mix non-MVCC-engine tables.
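Applied to the question's command:

mysqldump -u USER -pPASS DATABASE --lock-tables=false --no-create-info --tables field_collection_item --where="item_id IN (SELECT item_id FROM missing_field_collection_item);"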
Source: https://stackoverflow.com/questions/18053229/mysqldump-with-where-id-in-select-yields-table-was-not-locked-error