MySQL optimization suggestions for a large table

Submitted by [亡魂溺海] on 2019-12-02 14:07:53

None of the suggestions so far will help much, because...

  • Covering index: That is only slightly smaller than the table, so it is slightly faster.
  • KEY(tran_date) -- a waste; it is better to use the PK, which starts with tran_date.
  • PARTITIONing -- No. That is likely to be slower.
  • Removing tran_date (or otherwise rearranging the PK) -- This will hurt. The filtering (WHERE) is on tran_date; it is usually best to have that first.
  • So, why was COUNT(*) fast? Well, start by looking at the EXPLAIN. It will show that it used KEY(tran_date) instead of scanning the table. Less data to scan, hence faster.
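You can confirm that point yourself. A hedged sketch (table and index names assumed from the question):

```sql
-- Check which index MySQL picks for the fast COUNT(*).
EXPLAIN
SELECT COUNT(*)
FROM tran_sales
WHERE tran_date <= '2016-12-24';
-- The `key` column of the EXPLAIN output shows the chosen index;
-- a narrow secondary index on tran_date has fewer bytes per row than
-- the full table (clustered PK), so scanning it touches fewer pages.
```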

The real issue is that you have millions of rows to scan; touching millions of rows takes time, no matter which index is used.

How to speed it up? Create and maintain a summary table (e.g., one row per day per location_id per dept_id). Then query that table (thousands of rows) instead of the original table (millions of rows). The total count is SUM(counts); the total sum is SUM(sums); the average is SUM(sums)/SUM(counts); etc.
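A minimal sketch of that summary-table approach (the summary table name, column names, and refresh schedule are assumptions, not from the question):

```sql
-- One row per (day, location, dept): thousands of rows instead of millions.
CREATE TABLE tran_sales_daily (
    tran_date   DATE          NOT NULL,
    location_id INT           NOT NULL,
    dept_id     INT           NOT NULL,
    sums        DECIMAL(14,2) NOT NULL,  -- SUM(sales) for that day/location/dept
    qtys        INT           NOT NULL,  -- SUM(qty)
    counts      INT           NOT NULL,  -- COUNT(*)
    PRIMARY KEY (tran_date, location_id, dept_id)
);

-- Maintain it, e.g., nightly for the previous day
-- (or incrementally with INSERT ... ON DUPLICATE KEY UPDATE):
INSERT INTO tran_sales_daily
SELECT tran_date, location_id, dept_id,
       SUM(sales), SUM(qty), COUNT(*)
FROM tran_sales
WHERE tran_date = CURDATE() - INTERVAL 1 DAY
GROUP BY tran_date, location_id, dept_id;

-- Reports then aggregate the small table:
SELECT location_id, dept_id,
       ROUND(SUM(sums), 0), SUM(qtys)
FROM tran_sales_daily
WHERE tran_date <= '2016-12-24'
GROUP BY location_id, dept_id;
</antml_code_interleaved>
```

One caveat: COUNT(DISTINCT tran_id) cannot be reconstructed exactly from per-day counts unless a tran_id never spans more than one day; if it can, that one aggregate still has to come from the base table.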

For this query:

select location_id, dept_id,
       round(sum(sales), 0), sum(qty), count(distinct tran_id),
       now()
from tran_sales
where tran_date <= '2016-12-24'
group by location_id, dept_id;

There is not much you can do. One attempt would be a covering index; to actually cover the query it must include every column the query touches: (tran_date, location_id, dept_id, tran_id, sales, qty). But I don't think that will help much.
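If you do want to try it, the statement would look something like this (index name is an assumption; tran_id is included so COUNT(DISTINCT tran_id) can also be served from the index):

```sql
-- Covering index: the query can be answered from the index alone,
-- without touching the (wider) clustered rows.
ALTER TABLE tran_sales
  ADD INDEX ix_cover (tran_date, location_id, dept_id, tran_id, sales, qty);
```

It still scans one index entry per matching row, so the win is only the smaller row width, not a smaller row count.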
