How do I split the output from mysqldump into smaller files?

Asked by 孤城傲影 on 2020-11-27 13:50 · 18 answers · 1505 views

I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access, and I can only upload (compressed) SQL files of about 2 MB at most.

18 Answers
  • 2020-11-27 14:08

    I wrote a Python script to split a single large SQL dump file into separate files, one for each CREATE TABLE statement. It writes the files to a folder that you specify; if no output folder is specified, it creates one with the same name as the dump file, in the same directory. It works line by line without loading the whole file into memory first, so it handles large files well.

    https://github.com/kloddant/split_sql_dump_file

    import sys, re, os
    
    if sys.version_info[0] < 3:
        raise Exception("""Must be using Python 3.  Try running "C:\\Program Files (x86)\\Python37-32\\python.exe" split_sql_dump_file.py""")
    
    sqldump_path = input("Enter the path to the sql dump file: ")
    
    if not os.path.exists(sqldump_path):
        raise Exception("Invalid sql dump path.  {sqldump_path} does not exist.".format(sqldump_path=sqldump_path))
    
    default_output = sqldump_path[:-4] if sqldump_path.endswith('.sql') else sqldump_path
    output_folder_path = input("Enter the path to the output folder: ") or default_output
    
    if not os.path.exists(output_folder_path):
        os.makedirs(output_folder_path)
    
    table_name = None
    output_file_path = None
    smallfile = None
    
    with open(sqldump_path, 'rb') as bigfile:
        for line_number, line in enumerate(bigfile):
            line_string = line.decode("utf-8")
            if 'CREATE TABLE' in line_string.upper():
                match = re.match(r"^CREATE TABLE (?:IF NOT EXISTS )?`(?P<table>\w+)` \($", line_string)
                if match:
                    table_name = match.group('table')
                    print(table_name)
                    output_file_path = "{output_folder_path}/{table_name}.sql".format(output_folder_path=output_folder_path.rstrip('/'), table_name=table_name)
                    if smallfile:
                        smallfile.close()
                    smallfile = open(output_file_path, 'wb')
            if not table_name:
                continue
            smallfile.write(line)
        if smallfile:
            smallfile.close()
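If Python isn't handy, GNU csplit can do a similar line-oriented split in one command. This is a sketch with a made-up two-table dump file; unlike the script above, the pieces are named xx00, xx01, … rather than after the tables, and the `'{*}'` repeat argument is a GNU extension:

```shell
# Make a tiny two-table dump to split (stand-in for a real mysqldump file).
cat > dump.sql <<'EOF'
-- header comments
CREATE TABLE `users` (
  id INT
);
CREATE TABLE `posts` (
  id INT
);
EOF

# Split at every line starting with CREATE TABLE; '{*}' repeats the
# pattern for as long as it keeps matching. -s suppresses size output.
csplit -s dump.sql '/^CREATE TABLE/' '{*}'
```

xx00 holds everything before the first CREATE TABLE; each later piece holds one table.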
    
  • 2020-11-27 14:09

    Late reply, but I was looking for the same solution and came across the following one-liner on the site below:

    for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump "$I" | gzip > "$I.sql.gz"; done
    

    http://www.commandlinefu.com/commands/view/2916/backup-all-mysql-databases-to-individual-files

  • 2020-11-27 14:09

    This script should do it:

    #!/bin/sh
    
    #edit these
    USER=""
    PASSWORD=""
    MYSQLDIR="/path/to/backupdir"
    
    MYSQLDUMP="/usr/bin/mysqldump"
    MYSQL="/usr/bin/mysql"
    
    echo "- Dumping tables for each DB"
    databases=$($MYSQL --user="$USER" --password="$PASSWORD" -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|sys)")
    for db in $databases; do
        echo "- Creating $db DB"
        mkdir -p "$MYSQLDIR/$db"
        chmod -R 777 "$MYSQLDIR/$db"
        for tb in $($MYSQL --user="$USER" --password="$PASSWORD" -N -B -e "use $db; show tables"); do
            echo "-- Creating table $tb"
            # Note: --delayed-insert was removed in MySQL 5.7; drop it on newer servers.
            $MYSQLDUMP --opt --delayed-insert --insert-ignore --user="$USER" --password="$PASSWORD" "$db" "$tb" | bzip2 -c > "$MYSQLDIR/$db/$tb.sql.bz2"
        done
        echo
    done
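To load one of the per-table files on the destination, decompress and pipe it to mysql. The round trip can be sketched offline; `users.sql` here is a made-up stand-in, and the final mysql command is shown as a comment since it needs a live server:

```shell
# Stand-in for a single-table dump produced by the script above.
printf 'CREATE TABLE `users` (\n  id INT\n);\n' > users.sql

# Compress the way the script does, then verify the archive.
bzip2 -c users.sql > users.sql.bz2
bzip2 -t users.sql.bz2        # exits non-zero if the archive is corrupt

# Decompress and compare against the original.
bunzip2 -c users.sql.bz2 > roundtrip.sql
cmp users.sql roundtrip.sql

# On the destination server you would instead run (placeholder credentials):
#   bunzip2 -c users.sql.bz2 | mysql -u USER -p dbname
```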
    
  • 2020-11-27 14:11

    You can split an existing dump file with AWK. It's quick and simple.

    Let's split the dump by table:

    awk 'BEGIN { output = "comments" }
    /^CREATE TABLE/ { close(output); output = substr($3, 2, length($3) - 2) }
    { print > output }' dump.sql
    

    Or you can split the dump by database:

    awk 'BEGIN { output = "comments" }
    /^-- Current Database/ { close(output); output = $4 }
    { print > output }' backup.sql
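The table-splitting variant can be checked end to end against a tiny made-up dump, which makes the naming behaviour visible: everything before the first CREATE TABLE lands in a file called `comments`, and each table gets its own file:

```shell
# Minimal stand-in for a mysqldump file.
cat > dump.sql <<'EOF'
-- dump header
CREATE TABLE `users` (
  id INT
);
CREATE TABLE `posts` (
  id INT
);
EOF

# Each CREATE TABLE line starts a new output file named after the table
# ($3 is the backquoted name; substr strips the backquotes).
awk 'BEGIN { output = "comments" }
     /^CREATE TABLE/ { close(output); output = substr($3, 2, length($3) - 2) }
     { print > output }' dump.sql
```

After running this, the files `comments`, `users`, and `posts` each hold their section of the dump.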
    
  • 2020-11-27 14:12

    I've recently created sqlsplit.com. Try it out.

  • 2020-11-27 14:17

    First dump only the schema (it surely fits in 2 MB, no?):

    mysqldump -d --all-databases   # -d is short for --no-data: schema only
    

    and restore it.

    Afterwards, dump only the data as separate INSERT statements, so you can split the files and restore them without having to concatenate them on the remote server:

    mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE
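Because `--extended-insert=FALSE` puts each row in its own single-line INSERT, the data dump can be cut at any line boundary; the standard `split` tool then produces fixed-size chunks that each load independently. A sketch with a made-up data file (chunk size and names chosen for illustration):

```shell
# Fake data-only dump: one single-line INSERT per row,
# the shape --extended-insert=FALSE produces.
for i in 1 2 3 4 5 6; do
    echo "INSERT INTO \`t\` VALUES ($i);"
done > data.sql

# Two statements per chunk -> part_aa, part_ab, part_ac;
# each piece is a valid SQL file you can upload separately.
split -l 2 data.sql part_
```

For the real 2 MB limit you would use a much larger line count, or `split -C 2m` to cap chunks by size while still cutting only at line boundaries.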
    