mysqldump

Controlling the transaction size of the SQL file exported by mysqldump

放肆的年华 submitted on 2019-11-28 17:41:42
Background: Someone in a group chat asked today whether the INSERT statements produced by mysqldump can be organized as one INSERT per 10 rows.

Thought 1: the --extended-insert option. Recalling what I have learned, I only know of one pair of options:

    --extended-insert (the default): produces long INSERTs, batching many rows into each statement to speed up import
    --skip-extended-insert: produces one short INSERT per row

Neither satisfies the request; there is no way to force one INSERT per 10 rows.

Thought 2: "avoiding large transactions". I had never considered this question before. It was presumably raised in order to avoid large transactions, so it is enough that every INSERT be a small transaction. Let us explore the following questions: What is a large transaction? Can the INSERT statements produced by mysqldump form large transactions?

What is a large transaction? Definition: a transaction that runs for a long time and operates on a large amount of data. Risks of large transactions: it locks too much data, causing heavy blocking and lock timeouts, and it takes a long time to roll back; its long execution time easily causes replication lag; and it bloats the undo log. To avoid large transactions, based on my company's actual workload, I require that each operation touch fewer than 5,000 rows and that result sets stay under 2 MB.

Does the SQL file produced by mysqldump contain large transactions? As a premise, MySQL autocommits by default, so unless a transaction is explicitly opened, a single SQL …
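mysqldump itself has no "N rows per INSERT" option (the closest control is --net-buffer-length, which caps extended-INSERT statements by byte size rather than row count), so a fixed row count per statement has to come from post-processing the dump. A minimal sketch of that idea, assuming the row tuples of one table have already been extracted:

```python
# Sketch of a post-processing approach (not a mysqldump feature):
# group the VALUES tuples of one table into INSERT statements of at
# most batch_size rows, so each statement commits as a small
# transaction under autocommit.
def chunk_inserts(rows, table, batch_size=10):
    """Group row tuples into INSERT statements of at most batch_size rows."""
    statements = []
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        statements.append(f"INSERT INTO `{table}` VALUES {','.join(batch)};")
    return statements

# 25 rows at 10 per statement -> 3 statements (10 + 10 + 5 rows)
rows = [f"({n},'user{n}')" for n in range(25)]
print(len(chunk_inserts(rows, "users")))  # 3
```

With batch_size=10 every statement stays a small transaction, which is exactly the property the question is after.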

How To Avoid Repair With Keycache?

早过忘川 submitted on 2019-11-28 16:42:09
I have had some experience with optimizing the my.cnf file, but my database has around 4 million records (MyISAM). I am trying to restore from a mysqldump, but every time I do I eventually get the dreaded "Repair With Keycache", which may take days. Is there any way to get past this and let it run as "Repair By Sorting"? I have 2 GB RAM, dual cores, and lots of extra hard-drive space. Snippet from my.cnf:

    set-variable = max_connections=650
    set-variable = key_buffer=256M
    set-variable = myisam_sort_buffer_size=64M
    set-variable = join_buffer=1M
    set-variable = record_buffer=1M
    set-variable = sort_buffer …
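MyISAM falls back to the slow "Repair with keycache" path when a repair by sorting would exceed its configured limits, most notably myisam_max_sort_file_size and myisam_sort_buffer_size. A sketch of the relevant my.cnf settings (the values are illustrative, not a recommendation; tune them to your RAM and free disk space):

```ini
# Illustrative my.cnf fragment: raise the limits that force the
# "Repair with keycache" fallback so the server can use the much
# faster "Repair by sorting" during index rebuilds.
[mysqld]
myisam_sort_buffer_size   = 256M
myisam_max_sort_file_size = 10G
# helps multi-row INSERTs while the dump is loading
bulk_insert_buffer_size   = 64M
```

The temporary sort files go on disk, so myisam_max_sort_file_size can safely be far larger than RAM as long as the drive has room.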

Mysqldump: create column names for inserts when backing up

时间秒杀一切 submitted on 2019-11-28 16:37:54
How do I instruct mysqldump to back up with column names in the INSERT statements? In my case I did a normal backup with INSERT SQL, resulting in statements like:

    LOCK TABLES `users` WRITE;
    /*!40000 ALTER TABLE `users` … INSERT INTO `users` VALUES (1 …

Then I went ahead and removed a column from the users schema. After this, when I run the backup SQL I get a column-count mismatch error. To fix this, how do I go about instructing mysqldump to write column names too? Here is how I do it now:

    mysqldump --host=${dbserver} --user=${dbusername} --password=${dbpassword} \
        --no-create-db --no-create-info -
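The option that does this is --complete-insert (short form -c), which makes mysqldump write the column list into every INSERT so the dump still loads after a column is added or dropped. A sketch using the question's placeholders (the database and table names here are illustrative):

```shell
# --complete-insert emits column names in each INSERT, e.g.
#   INSERT INTO `users` (`id`, `name`, ...) VALUES (1, ...);
# so a later schema change no longer causes a column-count mismatch.
mysqldump --host=${dbserver} --user=${dbusername} --password=${dbpassword} \
    --no-create-db --no-create-info --complete-insert \
    mydatabase users > users_data.sql
```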

MySQL ERROR 1231 (42000):Variable 'character_set_client' can't be set to the value of 'NULL'

一曲冷凌霜 submitted on 2019-11-28 16:33:06
I have MySQL 5.0.84 running on a Slackware 13.0 staging server and wanted to copy a single table to another server, built on Ubuntu 14.04, for some other testing. I took a mysqldump of that table and copied it to the testing server. I get the following error when I try to restore the dump file:

    ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'NULL'

Please help me fix this error. Thanks! I did some searching on the internet and finally fixed it: I added the following text at the beginning of the mysqldump file and the restore was successful. /* …
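The excerpt is cut off before showing the text the author added. The fix commonly reported for ERROR 1231 on cross-version restores is to pin the session character set explicitly at the top of the dump file, so the dump's conditional /*!40101 ... */ comments no longer try to restore a NULL value; a sketch (utf8 is an assumption, use whatever character set the source table actually had):

```sql
-- Commonly reported workaround (not necessarily the author's exact
-- text): set the session character set explicitly before the dump's
-- own SET statements run.
SET NAMES utf8;
SET character_set_client = utf8;
```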

MYSQL Dump only certain rows

帅比萌擦擦* submitted on 2019-11-28 16:27:15
I am trying to do a MySQL dump of a few rows in my database. I can then use the dump to upload those few rows into another database. The code I have is working, but it dumps everything. How can I get mysqldump to dump only certain rows of a table? Here is my code:

    mysqldump --opt --user=username --password=password lmhprogram myResumes --where=date_pulled='2011-05-23' > test.sql

Just fix your --where option. It should be a valid SQL WHERE clause, like:

    --where="date_pulled='2011-05-23'"

You have the column name outside of the quotes. You need to quote the "where" clause. Try:

    mysqldump --opt - …
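Putting the two answers together, the corrected command from the question would look like this (database, table, and credentials are the question's own placeholders):

```shell
# Quote the whole WHERE clause so the shell passes it to mysqldump
# intact; only rows matching the clause are dumped.
mysqldump --opt --user=username --password=password \
    lmhprogram myResumes \
    --where="date_pulled='2011-05-23'" > test.sql
```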

mysqldump & gzip commands to properly create a compressed file of a MySQL database using crontab

倖福魔咒の submitted on 2019-11-28 16:17:27
Question: I am having problems getting a crontab to work. I want to automate a MySQL database backup. The setup:

    Debian GNU/Linux 7.3 (wheezy)
    MySQL Server version: 5.5.33-0+wheezy1 (Debian)
    directories user, backup and backup2 have 755 permission
    The user names for the MySQL db and the Debian account are the same

From the shell this command works:

    mysqldump -u user -p[user_password] [database_name] | gzip > dumpfilename.sql.gz

When I place this in a crontab using crontab -e:

    * * /usr/bin/mysqldump -u user …
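Two things commonly break a job like this: the crontab time specification needs all five fields, and cron treats an unescaped % as a newline, so any date +%... in the output filename must be written with \%. A sketch of a working entry (the schedule and the backup path are illustrative):

```shell
# crontab -e entry: run daily at 02:30. Note the \% escapes --
# cron converts a bare % into a newline, silently truncating the
# command. A crontab entry must also stay on a single line.
30 2 * * * /usr/bin/mysqldump -u user -p[user_password] [database_name] | gzip > /home/user/backup/dump_$(date +\%Y\%m\%d).sql.gz
```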

Error importing SQL dump into MySQL: Unknown database / Can't create database

蹲街弑〆低调 submitted on 2019-11-28 15:51:22
I'm confused about how to import a SQL dump file. I can't seem to import the database without creating the database first in MySQL. Here, username is the user name of someone with access to the database on the original server, and database_name is the name of the database from the original server. This is the error displayed when database_name has not yet been created:

    $ mysql -u username -p -h localhost database_name < dumpfile.sql
    Enter password:
    ERROR 1049 (42000): Unknown database 'database_name'

If I log into MySQL as root and create the database database_name:

    mysql -u root
    create database database_name;
    create user …
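There are two standard ways around ERROR 1049: create the empty database first, or take the dump with --databases so the file itself carries the CREATE DATABASE statement. A sketch using the names from the question:

```shell
# Option 1: create the empty database, then import into it.
mysql -u root -p -e "CREATE DATABASE database_name"
mysql -u username -p database_name < dumpfile.sql

# Option 2: on the original server, dump with --databases so the
# file contains its own CREATE DATABASE / USE statements, then
# import without naming a database on the command line.
mysqldump -u username -p --databases database_name > dumpfile.sql
mysql -u root -p < dumpfile.sql
```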

Clone MySQL database

☆樱花仙子☆ submitted on 2019-11-28 15:43:23
I have a database on a server with 120 tables. I want to clone the whole database under a new name, with the data copied. Is there an efficient way to do this?

    $ mysqldump yourFirstDatabase -u user -ppassword > yourDatabase.sql
    $ mysql yourSecondDatabase -u user -ppassword < yourDatabase.sql

    mysqldump -u <user> --password=<password> <DATABASE_NAME> | mysql -u <user> --password=<password> -h <hostname> <DATABASE_NAME_NEW>

Like the accepted answer but without .sql files:

    mysqldump sourcedb -u <USERNAME> -p<PASS> | mysql destdb -u <USERNAME> -p<PASS>

In case you use phpMyAdmin: select the database you …

MySQL import database but ignore specific table

半城伤御伤魂 submitted on 2019-11-28 15:38:53
Question: I have a large SQL file with one database and about 150 tables. I would like to use mysqlimport to import that database; however, I would like the import process to ignore or skip over a couple of tables. What is the proper syntax to import all tables but ignore some of them? Thank you.

Answer 1: mysqlimport is not the right tool for importing SQL statements. It is meant to import formatted text files such as CSV. What you want to do is feed your SQL dump directly to the mysql client with …
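The mysql client itself has no skip-table option; the usual choices are to exclude tables at dump time with mysqldump --ignore-table=db.table, or to filter the dump text before piping it in. A minimal filtering sketch, assuming one INSERT per line as --skip-extended-insert produces (the table names are illustrative):

```python
# Sketch: drop INSERT statements for unwanted tables before the dump
# is fed to the mysql client. Assumes each INSERT sits on its own line.
def filter_dump(lines, skip_tables):
    """Pass dump lines through, dropping INSERTs into skip_tables."""
    markers = tuple(f"INSERT INTO `{t}`" for t in sorted(skip_tables))
    return [line for line in lines if not line.startswith(markers)]

dump = [
    "CREATE TABLE `users` (id INT);",
    "INSERT INTO `users` VALUES (1),(2);",
    "INSERT INTO `logs` VALUES ('noise');",
]
for line in filter_dump(dump, {"logs"}):
    print(line)
```

Note that this skips only the data: the skipped table is still created empty, which is often what "ignore" means for a restore. Excluding it at dump time with --ignore-table avoids the post-processing entirely.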

How to use mysqldump for a portion of a table?

对着背影说爱祢 submitted on 2019-11-28 15:34:29
So I can export only a table like this:

    mysqldump -u root -p db_name table_name > table_name.sql

Is there any way to export only a portion of a table with mysqldump? For example, rows 0–1,000,000, rows 1,000,000–2,000,000, etc. Should I do this with mysqldump or a query?

Neo:

    mysqldump -uroot -p db_name table_name --where='id<1000000'

or you can use:

    SELECT * INTO OUTFILE 'data_path.sql' FROM table WHERE id<100000

noisex:

    mysqldump --skip-triggers --compact --no-create-info --user=USER --password=PASSWORD -B DATABASE --tables MY_TABLE --where='SOME_COLUMN>=xxxx' > out.sql

The file dumped is …
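To split a table into fixed-size ranges, the --where option can be driven from a small shell loop. A sketch, assuming an integer primary key named id (the chunk size, upper bound, and file names are illustrative):

```shell
# Dump table rows in 1,000,000-row id ranges, one file per chunk.
# Assumes an integer primary key `id`; set MAX at or above the
# table's highest id value.
CHUNK=1000000
MAX=3000000
start=0
while [ "$start" -lt "$MAX" ]; do
    end=$((start + CHUNK))
    mysqldump -u root -p db_name table_name \
        --where="id >= $start AND id < $end" > "table_name_${start}.sql"
    start=$end
done
```

Chunking by id value rather than by row offset keeps each dump a single range scan, which is far cheaper than a LIMIT/OFFSET over millions of rows.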