Does mysqldump handle binary data reliably?

Asked by 小鲜肉 on 2020-12-31 00:38

I have some tables in MySQL 5.6 that contain large binary data in some fields. I want to know if I can trust dumps created by mysqldump and be sure that those binary fields will be dumped and restored without corruption.

3 Answers
  •  暖寄归人
    2020-12-31 00:56

    No, it is not always reliable when you have binary blobs. In that case you MUST use the "--hex-blob" flag to get correct results.

    I have a case where these calls fail (importing on a different server, but both machines running CentOS 6 / MariaDB 10):

    mysqldump --single-transaction --routines --databases myalarm -uroot -p"PASSWORD" | gzip > /FILENAME.sql.gz
    gunzip < FILENAME.sql.gz | mysql -p"PASSWORD" -uroot --comments
    

    It produces a file that silently fails to import. Adding "--skip-extended-insert" gives me a file that's much easier to debug, and I find that this line is generated but can't be read back in (yet no error is reported during either the export or the import):

    INSERT INTO `panels` VALUES (1003,1,257126,141,6562,1,88891,'??\\\?ŖeV???,NULL);
    

    Note that the terminating quote on the binary data is missing in the original.

    select hex(packet_key) from panels where id=1003;
    --> DE77CF5C075CE002C596176556AAF9ED
    

    The column is binary data:

    CREATE TABLE `panels` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `enabled` tinyint(1) NOT NULL DEFAULT '1',
      `serial_number` int(10) unsigned NOT NULL,
      `panel_types_id` int(11) NOT NULL,
      `all_panels_id` int(11) NOT NULL,
      `installers_id` int(11) DEFAULT NULL,
      `users_id` int(11) DEFAULT NULL,
      `packet_key` binary(16) NOT NULL,
      `user_deleted` timestamp NULL DEFAULT NULL,
      PRIMARY KEY (`id`),
      ...
    

    So no, not only can you not necessarily trust mysqldump, you can't even rely on it to report an error when one occurs.
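
    A quick sanity check after a restore (just a sketch, reusing the packet_key column of the panels table above; adapt it to your own binary columns) is to compare the hex of the binary data on the source server and on the restored copy:

    -- run on both servers; the two result sets should be identical
    SELECT id, HEX(packet_key) FROM panels ORDER BY id;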


    An ugly workaround I used was to mysqldump excluding the two afflicted tables by adding options like this to the dump:

    --ignore-table=myalarm.panels 
    

    Then I used this bash script hack: basically, run a SELECT that produces INSERT values in which the NULL columns are handled and the binary column is turned into an UNHEX() call, like so:

    (123,45678,UNHEX("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"),"2014-03-17 00:00:00",NULL),
    

    Paste it into your editor of choice to play with it if you need to.

    echo "SET UNIQUE_CHECKS=0;SET FOREIGN_KEY_CHECKS=0;DELETE FROM panels;INSERT INTO panels VALUES " > all.sql
    mysql -N -uroot -p"PASSWORD" databasename -e "SELECT CONCAT('(',id,',', enabled,',', serial_number,',', panel_types_id,',', all_panels_id,',', IFNULL(CONVERT(installers_id,CHAR(20)),'NULL'),',', IFNULL(CONVERT(users_id,CHAR(20)),'NULL'), ',UNHEX(\"',HEX(packet_key),'\"),', IF(ISNULL(user_deleted),'NULL',CONCAT('\"', user_deleted,'\"')),'),') FROM panels" >> all.sql
    echo "SET UNIQUE_CHECKS=1;SET FOREIGN_KEY_CHECKS=1;" > all.sql
    

    That gives me a file called "all.sql" that needs the final comma in the INSERT turned into a semicolon (a possible one-liner for that is sketched below); then it can be run as above. I needed the "large import buffer" tweak set both in the interactive mysql shell and on the command line to process that file, because it's large.

    mysql ... --max_allowed_packet=1GB
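
    The trailing-comma fix can be scripted as well; a minimal sketch, assuming GNU sed and assuming it runs after the SELECT step but before the final echo appends the closing SET line:

    # replace the trailing comma on the file's last line (the last generated row) with a semicolon
    sed -i '$ s/,$/;/' all.sql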
    

    When I reported the bug I was eventually pointed at the "--hex-blob" flag, which does the same as my workaround but in a way that is trivial on my side. Add that option, blobs get dumped as hex, the end.
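
    Applied to the commands at the top of this answer, that presumably just means adding the flag to the dump; something along these lines (same placeholder password and file name as above):

    mysqldump --single-transaction --routines --hex-blob --databases myalarm -uroot -p"PASSWORD" | gzip > /FILENAME.sql.gz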
