I'm writing a one-line command that backs up each database into its own file, named after the database, instead of dumping everything into one SQL file.
E.g. db1 gets saved to db1.sql and db2 gets saved to db2.sql.
So far, I've gathered the following command to retrieve all databases.
mysql -uuname -ppwd -e 'show databases' | grep -v 'Database'
I'm planning to pipe it to awk and do something like
awk '{mysqldump -uuname -ppwd $1 > $1.sql}'
But that doesn't work.
I'm new to bash, so I could be wrong in my thinking.
What should I do to make it export each database to a file with its own name?
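A side note on the awk attempt: the braces form an awk program, not a shell command, so mysqldump never actually runs there. If you stay with awk, its system() call can launch mysqldump; a rough sketch, using the same placeholder credentials as above:
mysql -uuname -ppwd -e 'show databases' | grep -v 'Database' \
| awk '{ system("mysqldump -uuname -ppwd " $1 " > " $1 ".sql") }'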
Update:
OK, I have finally managed to get it working from the hints below.
This is the final script:
# replace [uname]/[pwd] with your own credentials
# change ~/db_backup to your own backup directory
mysql -u[uname] -p'[pwd]' -e "show databases" \
| grep -Ev 'Database|information_schema' \
| while read dbname; do
    echo "Dumping $dbname";
    mysqldump -u[uname] -p'[pwd]' "$dbname" > ~/db_backup/"$dbname".sql;
done
(The echo originally didn't work because the single quotes stopped $dbname from expanding and the trailing backslash joined the echo and mysqldump lines into one command; double quotes and a semicolon fix that.)
mysql -uroot -e 'show databases' | while read dbname; do mysqldump -uroot --complete-insert --some-other-options "$dbname" > "$dbname".sql; done
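One caveat with the line above: without --skip-column-names, the header row "Database" printed by show databases is read as if it were a database name. A variant that skips it (keeping the --some-other-options placeholder from the answer):
mysql -N -uroot -e 'show databases' | while read dbname; do mysqldump -uroot --complete-insert --some-other-options "$dbname" > "$dbname".sql; done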
Creating backups per database is indeed much better. Not only is it easier to restore when needed, but in my experience a dump of everything in one go would break if a single table was corrupt. With per-database backups, only the dump for that database fails and the rest remain valid.
The one-liner we created to back up our MySQL databases is:
mysql -s -r -u bupuser -pSecret -e 'show databases' | while read db; do mysqldump -u bupuser -pSecret "$db" -r /var/db-bup/"${db}".sql; [[ $? -eq 0 ]] && gzip /var/db-bup/"${db}".sql; done
It is best to create a new read-only MySQL user 'bupuser' with password 'Secret' (change these!). The one-liner first retrieves the list of databases, then loops over them and writes a .sql dump for each to /var/db-bup (change the path as you like). Only when no errors were encountered is the file also gzipped, which drastically saves storage. If a database ran into errors, you will see the plain .sql file instead of the .sql.gz file.
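As a sketch of how such a dump-only user could be created (the host, the password and the exact privilege list are assumptions to adjust for your setup):
mysql -u root -p -e "CREATE USER 'bupuser'@'localhost' IDENTIFIED BY 'Secret';
GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER, EVENT ON *.* TO 'bupuser'@'localhost';
FLUSH PRIVILEGES;"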
Here is an easy script that will:
- dump every database and compress the output to SCHEMA_NAME.sql.gz
- wrap each dump in autocommit/unique_checks/foreign_key_checks statements to speed up the later import
- exclude the default schemas
File: Dump_all.sh
How to use:
./Dump_all.sh -> will dump all databases
./Dump_all.sh SCHEMA_NAME -> will dump only SCHEMA_NAME
#!/bin/bash
MYSQL_USER="root"
MYSQL_PASS="YOUR_PASS"
echo "-- START --"
echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > tmp_sqlhead.sql
echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;" > tmp_sqlend.sql
if [ -z "$1" ]
then
echo "-- Dumping all DB ..."
for I in $(mysql -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' -s --skip-column-names);
do
if [ "$I" = information_schema ] || [ "$I" = mysql ] || [ "$I" = phpmyadmin ] || [ "$I" = performance_schema ] # exclude this DB
then
echo "-- Skip $I ..."
continue
fi
echo "-- Dumping $I ..."
# Concatenate the head/end files with the stdout of mysqldump (the '-' cat argument), then compress
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$I" | cat tmp_sqlhead.sql - tmp_sqlend.sql | gzip -fc > "$I.sql.gz"
done
else
I=$1;
echo "-- Dumping $I ..."
# Concatenate the head/end files with the stdout of mysqldump (the '-' cat argument), then compress
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$I" | cat tmp_sqlhead.sql - tmp_sqlend.sql | gzip -fc > "$I.sql.gz"
fi
# remove tmp files
rm tmp_sqlhead.sql
rm tmp_sqlend.sql
echo "-- FINISH --"
Here is what worked for me
mysql -s -r -uroot -e 'show databases' -N | while read dbname; do
mysqldump -uroot --complete-insert --single-transaction "$dbname" > "$dbname".sql;
done
Not an answer to your question, but take a look at the AutoMySQLBackup project on Sourceforge, instead of re-inventing the wheel. It does what you want, and offers a ton of additional features on top, including compression, encryption, rotation, and email notifications. I used it a while back and it worked really well.
It appears fine. The only thing I can find at the moment (without testing) is that you're missing a semicolon after Show Tables.
While looking for available packages for the AutoMySQLBackup project suggested by @Jeshurun, I came across Holland.
Intrigued by the name (I live in Belgium, to the south of the Netherlands, which is sometimes - or rather, in part - referred to as "Holland"), I decided to check it out. Perhaps it can help you as well.
This is what I am using, it's very simple and works fine for me.
mysql --skip-column-names -u root -p -e 'show databases' | while read dbname; do mysqldump --lock-all-tables -u root -p "$dbname"> "$(date +%Y%m%d)-$dbname".sql; done
With compression option:
mysql --skip-column-names -u root -p -e 'show databases' | while read dbname; do mysqldump --lock-all-tables -u root -p "$dbname" | gzip> /tmp/"$(date +%Y%m%d)-$dbname".sql.gz; done
If you didn't include the password in the command, you will be prompted for it once for the initial mysql command plus once per database that is dumped, i.e. one more time than the total number of databases you have.
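To avoid those repeated prompts, one common option (a sketch, not part of the answer above) is to put the credentials in a ~/.my.cnf that both mysql and mysqldump read, and keep it readable only by you:
# ~/.my.cnf  (chmod 600 ~/.my.cnf)
[client]
user=root
password=YourPasswordHere
With that file in place, the -p flags can be dropped from the one-liners above.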
Source: https://stackoverflow.com/questions/10867520/mysqldump-with-db-in-a-separate-file