How can I slow down a MySQL dump so as not to affect the current load on the server?

CA3LE

I have very large databases with tens of thousands of tables, some of which hold up to 5GB of data across tens of millions of rows. (I run a popular service)... I've always had headaches when backing up these databases. With default mysqldump, the server load quickly spirals out of control and everything locks up... affecting my users. Trying to stop the process can lead to crashed tables and a lot of downtime while those tables are repaired.

I now use...

mysqldump -u USER -p --single-transaction --quick --lock-tables=false DATABASE | gzip > OUTPUT.gz

The mysqldump reference at dev.mysql.com even says...

To dump large tables, you should combine the --single-transaction option with --quick.

It says nothing about this being dependent on the database being InnoDB; mine are MyISAM and this worked beautifully for me. Server load was almost completely unaffected and my service ran like a Rolex during the entire process. If you have large databases and backing them up is affecting your end users... this IS the solution. ;)
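
For what it's worth, --single-transaction only provides a consistent snapshot on transactional engines such as InnoDB; on MyISAM it is --lock-tables=false that keeps the dump from blocking your users. If you have many databases to cover, a wrapper like the one below is one way to apply the same options to all of them. This is a minimal sketch, assuming credentials live in ~/.my.cnf and that /backups (a hypothetical path) exists:

#!/bin/sh
# Dump every non-system database with the low-impact options above,
# compressing each one to its own gzip file under /backups (hypothetical path).
for DB in $(mysql -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
    mysqldump --single-transaction --quick --lock-tables=false "$DB" | gzip > "/backups/$DB.gz"
done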

If using InnoDB tables, use the --single-transaction and --quick options for mysqldump.

Use the nice and ionice commands to run the dump at low CPU and I/O priority, piping through gzip to compress the output:

nice -n 10 ionice -c2 -n 7 mysqldump db-name | gzip > db-name.sql.gz
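
To keep even the niced dump away from peak traffic, you can also schedule it with cron. A minimal sketch, assuming 3 AM is a quiet period for your server and /backups is a hypothetical destination:

# Run the low-priority dump nightly at 03:00 (crontab entry).
0 3 * * * nice -n 10 ionice -c2 -n 7 mysqldump db-name | gzip > /backups/db-name.sql.gz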
Darryl at NetHosted

You can prefix the mysqldump command with the following:

ionice -c3 nice -n19 mysqldump ...

This will run it at low I/O and CPU priority, which should limit its impact.

Note that this only changes how the process is scheduled: the dump yields CPU and I/O to other workloads, but each statement mysqldump executes is just as intensive as before; there is simply a longer pause between them.
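
If deprioritizing still isn't gentle enough, another option (not from the answers above) is to throttle the dump's actual throughput with pv's rate limit. A sketch, assuming pv is installed and that a 2 MB/s cap (an arbitrary illustrative value) suits your hardware:

# Cap the dump stream at 2 MB/s so it never saturates disk or network I/O.
mysqldump --single-transaction --quick --lock-tables=false db-name | pv -q -L 2m | gzip > db-name.sql.gz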
