MySQL load data infile - acceleration?

Jon Black

If you're using InnoDB and bulk loading, here are a few tips:

Sort your CSV file into the primary key order of the target table: remember that InnoDB uses a clustered primary key, so the load is faster if the file is already sorted!
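For illustration only, here is a hypothetical target table (not from the original answer) showing what "primary key order" means here: the CSV rows should be pre-sorted ascending by the clustered key, id in this sketch, before the load runs.

-- Hypothetical table: the clustered primary key is (id), so the CSV
-- should already be sorted ascending by id before LOAD DATA INFILE.
CREATE TABLE events (
    id      BIGINT NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id)
) ENGINE=InnoDB;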

A typical LOAD DATA INFILE sequence I use:

TRUNCATE <table>;

SET autocommit = 0;

LOAD DATA INFILE <path> INTO TABLE <table>...

COMMIT;
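As a sketch only, the same sequence with hypothetical names filled in (an events table and /tmp/events.csv); the FIELDS/LINES clauses depend entirely on how your file is formatted.

TRUNCATE events;

SET autocommit = 0;

-- Adjust the delimiters and the IGNORE clause to match your CSV.
LOAD DATA INFILE '/tmp/events.csv'
INTO TABLE events
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;    -- only if the file has a header row

COMMIT;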

Other optimisations you can use to boost load times (a combined sketch follows this list):

SET unique_checks = 0;
SET foreign_key_checks = 0;
SET sql_log_bin = 0;

Split the CSV file into smaller chunks.
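Putting those settings around the load itself, a bulk-load session might look roughly like this (hypothetical file and table names again; sql_log_bin = 0 needs SUPER, and the checks should be switched back on once the load finishes):

SET unique_checks = 0;        -- skip unique checks on secondary indexes during the load
SET foreign_key_checks = 0;   -- skip foreign key validation during the load
SET sql_log_bin = 0;          -- don't write the load to the binary log (this session only)
SET autocommit = 0;

LOAD DATA INFILE '/tmp/events_part01.csv' INTO TABLE events;
LOAD DATA INFILE '/tmp/events_part02.csv' INTO TABLE events;
COMMIT;

SET unique_checks = 1;
SET foreign_key_checks = 1;
SET sql_log_bin = 1;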

Typical import stats I have observed during bulk loads:

3.5 - 6.5 million rows imported per min
210 - 400 million rows per hour

This blog post is almost 3 years old, but it's still relevant and has some good suggestions for optimizing the performance of "LOAD DATA INFILE":

http://www.mysqlperformanceblog.com/2007/05/24/predicting-how-long-data-load-would-take/

InnoDB is a pretty good engine. However, it relies heavily on being tuned. One thing to note is that if your inserts are not in increasing primary key order, InnoDB can take a bit longer than MyISAM. This can easily be mitigated by setting a higher innodb_buffer_pool_size; my suggestion is 60-70% of total RAM on a dedicated MySQL machine.
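For reference, a rough sketch of checking and raising the buffer pool from SQL. Online resizing only works on MySQL 5.7+; on older versions, set innodb_buffer_pool_size in my.cnf and restart the server instead. The 6 GB figure is just an assumed example for a machine with roughly 8-10 GB of RAM.

-- Current buffer pool size, in bytes
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- MySQL 5.7+ can resize this online (rounded to a multiple of the chunk size);
-- older versions need the value in my.cnf plus a server restart.
SET GLOBAL innodb_buffer_pool_size = 6 * 1024 * 1024 * 1024;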
