How do I split the output from mysqldump into smaller files?

孤城傲影 · asked 2020-11-27 13:50

I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access, so I can only upload a (compressed) SQL file.

18 answers
  • 2020-11-27 14:00

    You don't need SSH access to either of your servers. The mysqldump and mysql clients are enough: with them you can dump your database and import it again in one step.

    On your PC, you can do something like:

    $ mysqldump -u originaluser -poriginalpassword -h originalhost originaldatabase | mysql -u newuser -pnewpassword -h newhost newdatabase

    and you're done. :-)

    Hope this helps.
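    Since the question only allows uploading a (compressed) SQL file, a variant of the same idea is to gzip the dump and, if it still exceeds the upload limit, cut it into fixed-size pieces with split(1). A minimal sketch; the file names and the 2 MB chunk size are arbitrary assumptions, and the first line stands in for a real mysqldump run:

    ```shell
    # Stand-in for the real dump; in practice this would be:
    #   mysqldump -u originaluser -p -h originalhost originaldatabase > dump.sql
    printf 'CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);\n' > dump.sql

    # Compress, then split into chunks of at most 2 MB each
    gzip -c dump.sql > dump.sql.gz
    split -b 2m dump.sql.gz dump.sql.gz.part-

    # To reassemble on a machine with shell access:
    #   cat dump.sql.gz.part-* | gunzip > dump.sql
    ls dump.sql.gz.part-*
    ```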

  • 2020-11-27 14:01

    Check out SQLDumpSplitter 2, I just used it to split a 40MB dump with success. You can get it at the link below:

    sqldumpsplitter.com

    Hope this helps.

  • 2020-11-27 14:03

    Try csplit(1) to cut the output into the individual tables based on a regular expression (matching the table boundary, I would think).
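    For example, mysqldump writes a "-- Table structure for table" comment before each table, so csplit can break on that pattern. A minimal sketch against a tiny fabricated two-table dump (only the marker text is taken from mysqldump's real output):

    ```shell
    # Fabricate a two-table dump for illustration
    cat > dump.sql <<'EOF'
    -- Table structure for table `users`
    CREATE TABLE users (id INT);
    -- Table structure for table `orders`
    CREATE TABLE orders (id INT);
    EOF

    # Split at every occurrence of the marker; -s silences byte counts,
    # -f sets the output prefix (table00, table01, ...)
    csplit -s -f table dump.sql '/-- Table structure for table/' '{*}'

    ls table*
    ```

    The first piece (table00) holds whatever preceded the first marker, typically the dump header.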

  • 2020-11-27 14:05

    I would recommend the utility BigDump; you can grab it here: http://www.ozerov.de/bigdump.php. It staggers the execution of the dump, staying as close as it can manage to your limit, executing whole lines at a time.

  • 2020-11-27 14:06

    This bash script splits a dump file of a single database into separate files, one per table, using csplit, and names each file after its table:

    #!/bin/bash
    
    ####
    # Split MySQL dump SQL file into one file per table
    # based on https://gist.github.com/jasny/1608062
    ####
    
    # Adjust this to your dump; mysqldump writes one of these before each table:
    START="/-- Table structure for table/"
    # or
    #START="/DROP TABLE IF EXISTS/"
    
    if [ $# -lt 1 ] || [[ $1 == "--help" ]] || [[ $1 == "-h" ]] ; then
            echo "USAGE: extract all tables:"
            echo " $0 DUMP_FILE"
            echo "extract one table:"
            echo " $0 DUMP_FILE [TABLE]"
            exit
    fi
    
    if [ $# -ge 2 ] ; then
            # extract only table $2
            csplit -s -ftable "$1" "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
    else
            # extract all tables
            csplit -s -ftable "$1" "$START" {*}
    fi
    
    [ $? -eq 0 ] || exit
    
    # the first piece is the dump header, shared by every output file
    mv table00 head
    
    FILE=$(ls -1 table* | tail -n 1)
    if [ $# -ge 2 ] ; then
            # cut off all other tables
            mv "$FILE" foot
    else
            # cut off the shared dump footer from the last piece
            csplit -b '%d' -s -f"$FILE" "$FILE" "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
            mv "${FILE}1" foot
    fi
    
    for FILE in table*; do
            # table name is between the backticks on the first line
            NAME=$(head -n1 "$FILE" | cut -d$'\x60' -f2)
            cat head "$FILE" foot > "$NAME.sql"
    done
    
    rm head foot table*
    

    Based on https://gist.github.com/jasny/1608062
    and https://stackoverflow.com/a/16840625/1069083

  • 2020-11-27 14:06

    Try this: https://github.com/shenli/mysqldump-hugetable It dumps the data into many small files, each containing at most MAX_RECORDS records. You can set this parameter in env.sh.
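    If you cannot run that tool, the same idea, at most N statements per output file, can be approximated with awk, assuming one INSERT per line as mysqldump produces with --skip-extended-insert. A sketch with a fabricated dump and a hypothetical chunk size of 2:

    ```shell
    # Fabricated dump with one INSERT per line (as from --skip-extended-insert)
    printf 'INSERT INTO t VALUES (%d);\n' 1 2 3 4 5 > dump.sql

    # Write at most 2 statements per chunk: chunk0.sql, chunk1.sql, ...
    awk 'NR % 2 == 1 { file = "chunk" int((NR-1)/2) ".sql" } { print > file }' dump.sql

    ls chunk*.sql
    ```

    Note this naive version ignores the CREATE TABLE header and dump footer, which each chunk would still need before it can be imported on its own.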
