PostgreSQL: speed up SELECT query in table with millions of rows

Backend · unresolved · 3 answers · 1888 views

Asked by 不思量自难忘° on 2020-12-30 16:49

I have a table with > 4.5 million rows and my SELECT query is far too slow for my needs.

The table is created with:

CREATE TABLE all_leg

3 Answers
  •  不知归路
    2020-12-30 17:03

    1. The first thing to change is the composite primary key: replace it with a plain single-column surrogate key. Secondary indexes work best alongside a narrow integer key that acts as a spine and lets the index fetch the rows you need quickly. With a primary key as wide as the one in your example, the planner may decide that scanning the whole table is cheaper than using the index. (A combined sketch of points 1-4 follows this list.)

    2. Even if an index is good enough for the planner to use, the ORDER BY clause may still prevent it from being chosen. I say "may" because, as with many things in SQL, it depends on the actual data in the table, the statistics, and so on. I'm not sure exactly how Postgres handles this, but you may want to build another index on the column used in ORDER BY, or try a composite index on (dep_dt, price_ct). You can also add dep_dt to the ORDER BY list to give the planner a hint.

    3. Do you need every column from this table? Selecting * versus a specific column such as id can also have an impact here.

    4. How many distinct values does the dep_dt column have? If it contains many duplicate values, the planner may decide that scanning the whole table is more efficient than using an index on it.
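
    Here is a minimal sketch of points 1-4 in SQL. The column names dep_dt and price_ct come from the question; everything else (the id column, the all_leg_pkey constraint name, and the sample WHERE/ORDER BY/LIMIT values) is an assumption, since the full table definition and query were not reproduced above.

    -- 1. Add a plain single-column surrogate key, then drop the composite one.
    --    The constraint name is assumed to follow the default <table>_pkey
    --    pattern; check \d all_leg for the real name before running this.
    ALTER TABLE all_leg ADD COLUMN id bigserial;
    ALTER TABLE all_leg DROP CONSTRAINT all_leg_pkey;
    ALTER TABLE all_leg ADD PRIMARY KEY (id);

    -- 2. Composite index covering the filter column and the ordering column.
    CREATE INDEX idx_all_leg_dep_dt_price_ct ON all_leg (dep_dt, price_ct);

    -- 3. Select only the columns you need; the filter values here are made up.
    SELECT id, dep_dt, price_ct
    FROM   all_leg
    WHERE  dep_dt >= DATE '2016-01-01' AND dep_dt < DATE '2016-01-08'
    ORDER  BY dep_dt, price_ct
    LIMIT  100;

    -- 4. Check how many distinct values dep_dt has, and what the planner
    --    thinks after an ANALYZE (a negative n_distinct means a fraction
    --    of the total row count).
    SELECT count(DISTINCT dep_dt) AS distinct_dep_dt, count(*) AS total_rows
    FROM   all_leg;
    SELECT n_distinct
    FROM   pg_stats
    WHERE  tablename = 'all_leg' AND attname = 'dep_dt';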

    In summary, SQL tuning is the art of experimenting, because everything depends on the current data: the planner uses statistics built by the analyzer to guess the optimal query plan. A query tuned against a table with thousands of rows may therefore stop performing well once you reach millions.
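
    Because the planner relies on those statistics, a reasonable way to experiment is to refresh them and look at the plan PostgreSQL actually chooses. The query below is the same hypothetical one used in the sketch above.

    -- Refresh planner statistics for the table, then inspect the chosen plan.
    ANALYZE all_leg;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id, dep_dt, price_ct
    FROM   all_leg
    WHERE  dep_dt >= DATE '2016-01-01' AND dep_dt < DATE '2016-01-08'
    ORDER  BY dep_dt, price_ct
    LIMIT  100;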
