sql-tuning

Teradata SQL Optimization: NOT IN (List), Col <> and IN LIST Optimization

戏子无情 submitted on 2019-12-13 00:55:49
Question: I have queries with a LOT of these situations: SEL TB1.C1, TB2.C2, TB3.C4, TB5.C5 ... <join conditions involving all tables TB1 through TB4; most are inner joins, some are LOJ> ... WHERE TB2.C2 NOT IN (<list>) OR TB3.C5 <> 'string' OR TB5.C8 NOT IN (<another long list>). Is there a better way to rewrite these filter conditions (NOT IN (list), Col <>, and IN list)? Some of the columns are in the SELECT list and some appear only in the filter conditions. Source: https://stackoverflow.com
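A common Teradata rewrite for long NOT IN lists is to load the values into a volatile table and anti-join with NOT EXISTS, which sidesteps the three-valued-logic penalty NOT IN pays on nullable columns and gives the optimizer real demographics to plan with. A minimal sketch for one of the lists, using the question's tables with hypothetical join keys and values:

CREATE VOLATILE TABLE excluded_c2 (val VARCHAR(50))
ON COMMIT PRESERVE ROWS;

INSERT INTO excluded_c2 VALUES ('A');
INSERT INTO excluded_c2 VALUES ('B');   -- one row per former IN-list value

SELECT tb1.c1, tb2.c2
FROM tb1
INNER JOIN tb2 ON tb1.k = tb2.k         -- hypothetical join key
WHERE NOT EXISTS (
    SELECT 1 FROM excluded_c2 e WHERE e.val = tb2.c2
);

Note that the question's filters are OR'ed together, so each NOT IN would need its own anti-join, or the predicates would need restructuring before this applies.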

MySQL Query Performance - Searching by Distance

孤街醉人 submitted on 2019-12-12 03:36:32
Question: I have the following MySQL query, which runs against a table with around 50,000 records. The query returns records within a 20-mile radius, and I'm using a bounding box in the WHERE clause to narrow down the rows. The query is sorted by distance and limited to 10 records, as it will be used on a paginated page. It currently takes 0.0210 seconds on average to complete, but because the website is so busy I am looking for ways to improve this. The adverts table has around 20
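The standard shape for this is the bounding box doing the cheap, index-friendly cut and the haversine formula computing the exact distance only for the survivors. A minimal sketch, assuming hypothetical id/lat/lng columns on the adverts table and user variables for the search centre (3959 is the Earth's radius in miles, 69 is roughly miles per degree of latitude):

SET @lat = 51.50, @lng = -0.12;   -- hypothetical search centre

SELECT id,
       3959 * ACOS(COS(RADIANS(@lat)) * COS(RADIANS(lat))
                   * COS(RADIANS(lng) - RADIANS(@lng))
                 + SIN(RADIANS(@lat)) * SIN(RADIANS(lat))) AS distance
FROM adverts
WHERE lat BETWEEN @lat - (20 / 69.0) AND @lat + (20 / 69.0)
  AND lng BETWEEN @lng - (20 / (69.0 * COS(RADIANS(@lat))))
              AND @lng + (20 / (69.0 * COS(RADIANS(@lat))))
HAVING distance <= 20
ORDER BY distance
LIMIT 10;

A composite index on (lat, lng) lets MySQL satisfy at least the latitude range from the index; at 50,000 rows that is usually what keeps this query in the low milliseconds.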

Teradata SQL tuning with Sum and other aggregate functions

蹲街弑〆低调 submitted on 2019-12-11 12:21:41
Question: I have a query like: SEL tb1.col1, tb4.col2, (CASE WHEN tb4.col4 IN (<IN list with more than 1000 values!>) THEN tb4.col7 ELSE 'Flag' END) AS "Dcol1", SUM(tb3.col1), SUM(tb3.col2), SUM(tb2.col4), etc. FROM tb1 LEFT OUTER JOIN tb2 <condition> LOJ tb3 <conditions> WHERE <tb1 condition> AND <tb2 condition> AND <tb3 condition> GROUP BY (CASE <condition>), tb2.colx, tb1.coly. The problem is that TB3 and TB4 are HUGE fact tables, and the PI of the fact tables is NOT included in the joins or queries here. What I have done
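With a 1000+ value IN list inside a CASE, one common Teradata tactic is to move the list into a volatile table and express the decode as a left join, so the huge literal list disappears from the request text and the optimizer can redistribute on the value. A minimal self-contained sketch with hypothetical names and join keys:

CREATE VOLATILE TABLE flag_vals (col4_val INTEGER)
ON COMMIT PRESERVE ROWS;

INSERT INTO flag_vals VALUES (101);
INSERT INTO flag_vals VALUES (102);    -- one row per former IN-list value

SELECT tb4.col2,
       CASE WHEN f.col4_val IS NOT NULL THEN tb4.col7 ELSE 'Flag' END AS Dcol1,
       SUM(tb3.col1) AS total1
FROM tb4
LEFT OUTER JOIN flag_vals f ON f.col4_val = tb4.col4
INNER JOIN tb3 ON tb3.k = tb4.k        -- hypothetical join key
GROUP BY 1, 2;

When the fact tables' PI appears in none of the joins, collecting statistics on the actual join columns (or pre-aggregating the fact tables into volatile tables on a more useful PI) is usually the next lever.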

Why does Postgres do a sequential scan where the index would return < 1% of the data?

 ̄綄美尐妖づ submitted on 2019-12-11 08:57:16
Question: I have 19 years of Oracle and MySQL experience (DBA and dev) and I am new to Postgres, so I may be missing something obvious, but I cannot get this query to do what I want. NOTE: this query is running on an EngineYard Postgres instance, and I am not immediately aware of the parameters it has set up. Also, the columns applicable_type and status in the items table are of the extension type citext. The following query can take in excess of 60 seconds to return rows: SELECT items.item_id, CASE WHEN items
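The first diagnostic step is to compare the planner's row estimates with reality, and then to test whether the sequential scan is a pure costing decision. A minimal sketch, with hypothetical filter values for the question's citext columns:

-- Show estimated vs. actual rows and which scan type was chosen
EXPLAIN (ANALYZE, BUFFERS)
SELECT item_id
FROM items
WHERE applicable_type = 'Widget' AND status = 'active';

-- If the estimates look sane, check whether the seq scan is just a costing choice
SET enable_seqscan = off;   -- session-level, for diagnosis only
EXPLAIN (ANALYZE, BUFFERS)
SELECT item_id
FROM items
WHERE applicable_type = 'Widget' AND status = 'active';

If the estimate is wildly off, a fresh ANALYZE items (or a higher statistics target on those columns) often flips the plan.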

Oracle 11g - most efficient way of inserting multiple rows

痞子三分冷 submitted on 2019-12-05 01:40:39
Question: I have an application which is running slowly over a WAN - we think the cause is multiple inserts into a table. I'm currently looking into more efficient ways to insert multiple rows at the same time. I found this method: INSERT ALL INTO MULTI_INSERT(VAL_1, VAL_2) VALUES (100,20) INTO MULTI_INSERT(VAL_1, VAL_2) VALUES (21,2) INTO MULTI_INSERT(VAL_1, VAL_2) VALUES (321,10) INTO MULTI_INSERT(VAL_1, VAL_2) VALUES (22,13) INTO MULTI_INSERT(VAL_1, VAL_2) VALUES (14,121) INTO MULTI_INSERT(VAL_1,
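For reference, the INSERT ALL form must end with a subquery; the usual idiom is SELECT * FROM dual. A minimal self-contained sketch using the question's table:

INSERT ALL
  INTO MULTI_INSERT (VAL_1, VAL_2) VALUES (100, 20)
  INTO MULTI_INSERT (VAL_1, VAL_2) VALUES (21, 2)
  INTO MULTI_INSERT (VAL_1, VAL_2) VALUES (321, 10)
SELECT * FROM dual;

Over a WAN, though, the statement text itself is rarely the bottleneck; the round trips are. Array binding (OCI array DML, or JDBC addBatch/executeBatch on a single prepared INSERT) sends many rows per round trip and typically beats literal-packed INSERT ALL statements.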

What are the performance implications of Oracle IN Clause with no joins?

依然范特西╮ submitted on 2019-12-04 19:55:42
I have a query of this form that will on average take ~100 IN-clause elements, and at some rare times more than 1000 elements. If there are more than 1000 elements, we chunk the IN clause down to 1000 (an Oracle maximum). The SQL is of the form SELECT * FROM tab WHERE PrimaryKeyID IN (1,2,3,4,5,...). The tables I am selecting from are huge and contain millions more rows than are in my IN clause. My concern is that the optimizer may elect to do a table scan (our database does not have up-to-date statistics - yeah, I know ...). Is there a hint I can pass to force the use of the primary key -
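Oracle does have such a hint: INDEX names the table (or alias) and optionally a specific index. A minimal sketch, where TAB_PK is a hypothetical name for the primary-key index:

SELECT /*+ INDEX(t TAB_PK) */ *
FROM tab t
WHERE PrimaryKeyID IN (1, 2, 3, 4, 5);

Writing INDEX(t) without an index name lets the optimizer choose any index on t, which is safer if the primary-key index name varies across environments.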

Why does PostgreSQL query performance drop over time, but is restored when rebuilding the index

我的梦境 submitted on 2019-12-02 18:39:53
According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes, and inserts that over time (a few days) shows significant query degradation. If we drop and recreate the index, query performance is restored. We are using out-of-the-box settings. The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). We are searching based on an index, not the primary key (I've confirmed the index is being used, at least
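Performance that degrades under churn and snaps back after an index rebuild is the classic signature of index bloat from dead tuples; the usual levers are more aggressive autovacuum on the hot table plus periodic reindexing. A minimal sketch, assuming a hypothetical table big_table and index big_table_search_idx:

-- Vacuum this table much more aggressively than the defaults
ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor = 0.02,
    autovacuum_analyze_scale_factor = 0.02
);

-- Rebuild the bloated index without blocking writers
-- (REINDEX ... CONCURRENTLY requires PostgreSQL 12 or later)
REINDEX INDEX CONCURRENTLY big_table_search_idx;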