MySQL database optimization best practices

终归单人心 2021-01-30 06:08

What are the best practices for optimizing a MySQL installation for best performance when handling somewhat larger tables (> 50k records with a total of around 100MB per table)?

4 Answers
  •  渐次进展  2021-01-30 06:23

    It's hard to paint things with a broad brush, but a moderately high-level view is possible.

    • You need to evaluate your read:write ratio. For tables with ratios lower than about 5:1 (i.e. relatively write-heavy), you will probably benefit from InnoDB, because inserts then won't block selects. But if you aren't using transactions, you should change innodb_flush_log_at_trx_commit to 2 (or 0) to get performance back over MyISAM; the default of 1 flushes the log to disk at every commit (a sketch follows this list).
    • Look at the memory parameters. MySQL's defaults are very conservative, and some of the memory limits can be raised by a factor of 10 or more even on ordinary hardware. This will benefit your SELECTs more than your INSERTs (the main knobs are sketched after this list).
    • MySQL can log things like queries that aren't using indices, as well as queries that simply take too long (the threshold is user-definable); see the logging sketch after this list.
    • The query cache can be useful, but you need to instrument it (i.e. measure how much it is actually being used). Cacti can do that, as can Munin; a quick SQL check is sketched after this list.
    • Application design is also important:
      • Lightly caching frequently fetched but smallish datasets can make a big difference (e.g. a cache lifetime of a few seconds).
      • Don't re-fetch data that you already have to hand.
      • Multi-step storage can help with a high volume of inserts into tables that are also busily read. The basic idea is to have a table for ad-hoc inserts (INSERT DELAYED can also be useful), plus a batch process that moves those rows within MySQL to where all the reads are happening. There are variations of this; one is sketched after this list.
    • Don't forget that perspective and context matter, too: what looks like a long time for an UPDATE might actually be quite trivial if that "long" update only happens once a day.
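
    A minimal sketch of the flush-log change, assuming you can tolerate losing up to roughly a second of writes on a crash:

        -- innodb_flush_log_at_trx_commit defaults to 1 (flush and sync the
        -- log at every commit). 2 syncs roughly once per second; 0 is the
        -- most relaxed. Normally set in my.cnf, but it is a dynamic variable:
        SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;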
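
    For the memory parameters, the variable names below are the standard ones, but the sizes are purely illustrative starting points for a dedicated box, not recommendations:

        -- InnoDB's data/index cache: the single most important InnoDB
        -- setting. Dynamic since MySQL 5.7; earlier versions need a my.cnf
        -- change and a restart.
        SET GLOBAL innodb_buffer_pool_size = 1024 * 1024 * 1024;  -- 1 GB
        -- MyISAM index cache (only matters for MyISAM tables):
        SET GLOBAL key_buffer_size = 256 * 1024 * 1024;           -- 256 MB
        -- Check what you actually have before and after:
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
        SHOW VARIABLES LIKE 'key_buffer_size';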
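
    Turning on the logging is a few statements (the variable names are the modern ones; very old versions spelled some of these differently and required a my.cnf change):

        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL long_query_time = 2;  -- your definition of "too long", seconds
        SET GLOBAL log_queries_not_using_indexes = 'ON';
        SHOW VARIABLES LIKE 'slow_query_log_file';  -- where the entries end up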
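
    You can instrument the query cache from SQL itself, whether or not you also graph it with Cacti or Munin (note the query cache was removed entirely in MySQL 8.0, so this applies to 5.7 and earlier):

        SHOW VARIABLES LIKE 'query_cache%';  -- is it enabled, and how big
        SHOW STATUS LIKE 'Qcache%';          -- compare Qcache_hits with
                                             -- Qcache_inserts for the hit rate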
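
    Finally, a sketch of the multi-step storage idea. The hits table and its columns are hypothetical, and INSERT DELAYED itself was removed in MySQL 5.7, so a plain INSERT is shown; the boundary trick keeps the batch move from losing rows that arrive mid-copy:

        -- Hot path: ad-hoc inserts land in a small staging table, so they
        -- never contend with readers of the big `hits` table.
        CREATE TABLE hits_staging LIKE hits;
        INSERT INTO hits_staging (page_id, hit_time) VALUES (42, NOW());

        -- Batch job, run every few minutes (cron or the event scheduler).
        -- Capture a boundary first so rows inserted during the copy survive
        -- (assumes an AUTO_INCREMENT id column on hits_staging):
        SELECT COALESCE(MAX(id), 0) INTO @boundary FROM hits_staging;
        INSERT INTO hits SELECT * FROM hits_staging WHERE id <= @boundary;
        DELETE FROM hits_staging WHERE id <= @boundary;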
