Conditional aggregation performance

Backend · open · 2 answers · 1279 views
上瘾入骨i — 2020-12-03 01:45

Let us have the following data

 IF OBJECT_ID('dbo.LogTable', 'U') IS NOT NULL DROP TABLE dbo.LogTable

 -- Populate 100,000 rows with random dates. The original snippet was
 -- truncated in this copy; the completion below is a plausible reconstruction.
 SELECT TOP 100000
        DATEADD(day, -(ABS(CHECKSUM(NEWID())) % 3650), GETDATE()) AS datesent
 INTO dbo.LogTable
 FROM sys.all_objects a
 CROSS JOIN sys.all_columns b

2 Answers
  • 2020-12-03 02:04

    Here's my example: subqueries on large tables were extremely slow (around 40-50 sec), and I was advised to rewrite the query with FILTER (conditional aggregation), which sped it up to 1 sec. I was amazed.

    Now I always use conditional aggregation with FILTER, because you join the large tables only once and all the per-column retrieval is done with FILTER. Sub-selecting on large tables for each column is a bad idea.

    Thread: SQL Performance Issues with Inner Selects in Postgres for tabulated report

    I needed a tabulated report, as follows,

    Example (easy flat stuff first, then the complicated tabulated stuff):

    RecallID | RecallDate | Event |..| WalkAlone | WalkWithPartner |..| ExerciseAtGym
    256      | 10-01-19   | Exrcs |..| NULL      | NULL            |..| yes
    256      | 10-01-19   | Walk  |..| yes       | NULL            |..| NULL
    256      | 10-01-19   | Eat   |..| NULL      | NULL            |..| NULL
    257      | 10-01-19   | Exrcs |..| NULL      | NULL            |..| yes
    

    My SQL had Inner Selects for the tabulated answer-based columns, and looked like this:

    select 
    -- Easy flat stuff first
    r.id as recallid, r.recall_date as recalldate, ... ,
    
    -- Example of Tabulated Columns:
    (select l.description from answers_t ans, activity_questions_t aq, lookup_t l
     where l.id = aq.answer_choice_id and aq.question_id = 13
       and aq.id = ans.activity_question_id and aq.activity_id = 27
       and ans.event_id = e.id)
         as transportationotherintensity,
    (select l.description from answers_t ans, activity_questions_t aq, lookup_t l
     where l.id = 66 and l.id = aq.answer_choice_id and aq.question_id = 14
       and aq.id = ans.activity_question_id and ans.event_id = e.id)
         as commutework,
    (select l.description from answers_t ans, activity_questions_t aq, lookup_t l
     where l.id = 67 and l.id = aq.answer_choice_id and aq.question_id = 14
       and aq.id = ans.activity_question_id and ans.event_id = e.id)
         as commuteschool,
    (select l.description from answers_t ans, activity_questions_t aq, lookup_t l
     where l.id = 95 and l.id = aq.answer_choice_id and aq.question_id = 14
       and aq.id = ans.activity_question_id and ans.event_id = e.id)
         as dropoffpickup,
    

    The performance was horrible. Gordon Linoff recommended joining the large table ANSWERS_T once and using FILTER as appropriate for each of the tabulated selects. That sped it up to 1 sec.

    select ans.event_id,
           max(l.description) filter (where aq.question_id = 13 and aq.activity_id = 27) as transportationotherintensity,
           max(l.description) filter (where l.id = 66 and aq.question_id = 14) as commutework,
           . . .
    from activity_questions_t aq join
         lookup_t l 
         on l.id = aq.answer_choice_id join
         answers_t ans
         on aq.id = ans.activity_question_id
    group by ans.event_id
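
    The shape of that rewrite can be reproduced on toy data. Below is a minimal Python/sqlite3 sketch of the pivot-via-FILTER pattern; the one-table schema is made up for illustration (the real query joins answers_t, activity_questions_t, and lookup_t), and FILTER on aggregates requires SQLite 3.30+.

```python
# Minimal sketch of the pivot-via-FILTER pattern on toy data.
# Hypothetical one-table schema for illustration only.
# FILTER on aggregates needs SQLite >= 3.30.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE answers (event_id INT, question_id INT, description TEXT);
    INSERT INTO answers VALUES
        (1, 13, 'moderate'),
        (1, 14, 'bus'),
        (2, 14, 'car');
""")

# One pass over the table; each output column picks its rows with FILTER.
rows = con.execute("""
    SELECT event_id,
           MAX(description) FILTER (WHERE question_id = 13) AS intensity,
           MAX(description) FILTER (WHERE question_id = 14) AS commute
    FROM answers
    GROUP BY event_id
    ORDER BY event_id
""").fetchall()

print(rows)  # [(1, 'moderate', 'bus'), (2, None, 'car')]
```

    Each per-column subquery becomes a FILTER on the aggregate, so the large table is scanned once instead of once per column.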
    
  • 2020-12-03 02:08

    Short summary

    • Performance of subqueries method depends on the data distribution.
    • Performance of conditional aggregation does not depend on the data distribution.

    The subqueries method can be faster or slower than conditional aggregation; it depends on the data distribution.

    Naturally, if the table has a suitable index, then subqueries are likely to benefit from it, because the index allows scanning only the relevant part of the table instead of performing a full scan. A suitable index is unlikely to significantly benefit the conditional aggregation method, because it will scan the full index anyway. The only benefit comes if the index is narrower than the table, so the engine has to read fewer pages into memory.

    Knowing this you can decide which method to choose.
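
    To be clear, the choice is purely about performance: both methods return identical results. Here is a minimal Python/sqlite3 sketch (toy LogTable with a made-up date spread; sqlite standing in for SQL Server) showing the two forms agree:

```python
# Both methods compute the same counts; only the access pattern differs.
# Toy LogTable with dates spaced 30 days apart (hypothetical data).
import sqlite3
from datetime import datetime, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LogTable (datesent TEXT)")
now = datetime(2020, 12, 3)
rows = [((now - timedelta(days=30 * i)).isoformat(),) for i in range(100)]
con.executemany("INSERT INTO LogTable VALUES (?)", rows)

cutoff = (now - timedelta(days=365)).isoformat()

# Subqueries method: one scan per output column.
sub = con.execute("""
    SELECT (SELECT COUNT(*) FROM LogTable)                    AS all_cnt,
           (SELECT COUNT(*) FROM LogTable WHERE datesent > ?) AS last_year_cnt
""", (cutoff,)).fetchone()

# Conditional aggregation: a single scan.
agg = con.execute("""
    SELECT COUNT(*)                                      AS all_cnt,
           SUM(CASE WHEN datesent > ? THEN 1 ELSE 0 END) AS last_year_cnt
    FROM LogTable
""", (cutoff,)).fetchone()

print(sub, agg)  # (100, 13) (100, 13)
```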


    First test

    I made a larger test table, with 5M rows. There were no indexes on the table. I measured the IO and CPU stats using SQL Sentry Plan Explorer. I used SQL Server 2014 SP1-CU7 (12.0.4459.0) Express 64-bit for these tests.

    Indeed, your original queries behaved as you described: subqueries were faster, even though their reads were 3 times higher.

    After a few tries on the table without an index, I rewrote your conditional aggregate and added variables to hold the values of the DATEADD expressions.

    The overall time became significantly faster.

    Then I replaced SUM with COUNT and it became a little bit faster again.

    With that, conditional aggregation became pretty much as fast as the subqueries.

    Warm the cache (CPU=375)

    SELECT -- warm cache
        COUNT(*) AS all_cnt
    FROM LogTable
    OPTION (RECOMPILE);
    

    Subqueries (CPU=1031)

    SELECT -- subqueries
    (
        SELECT count(*) FROM LogTable 
    ) all_cnt, 
    (
        SELECT count(*) FROM LogTable WHERE datesent > DATEADD(year,-1,GETDATE())
    ) last_year_cnt,
    (
        SELECT count(*) FROM LogTable WHERE datesent > DATEADD(year,-10,GETDATE())
    ) last_ten_year_cnt
    OPTION (RECOMPILE);
    

    Original conditional aggregation (CPU=1641)

    SELECT -- conditional original
        COUNT(*) AS all_cnt,
        SUM(CASE WHEN datesent > DATEADD(year,-1,GETDATE())
                 THEN 1 ELSE 0 END) AS last_year_cnt,
        SUM(CASE WHEN datesent > DATEADD(year,-10,GETDATE())
                 THEN 1 ELSE 0 END) AS last_ten_year_cnt
    FROM LogTable
    OPTION (RECOMPILE);
    

    Conditional aggregation with variables (CPU=1078)

    DECLARE @VarYear1 datetime = DATEADD(year,-1,GETDATE());
    DECLARE @VarYear10 datetime = DATEADD(year,-10,GETDATE());
    
    SELECT -- conditional variables
        COUNT(*) AS all_cnt,
        SUM(CASE WHEN datesent > @VarYear1
                 THEN 1 ELSE 0 END) AS last_year_cnt,
        SUM(CASE WHEN datesent > @VarYear10
                 THEN 1 ELSE 0 END) AS last_ten_year_cnt
    FROM LogTable
    OPTION (RECOMPILE);
    

    Conditional aggregation with variables and COUNT instead of SUM (CPU=1062)

    SELECT -- conditional variable, count, not sum
        COUNT(*) AS all_cnt,
        COUNT(CASE WHEN datesent > @VarYear1
                 THEN 1 ELSE NULL END) AS last_year_cnt,
        COUNT(CASE WHEN datesent > @VarYear10
                 THEN 1 ELSE NULL END) AS last_ten_year_cnt
    FROM LogTable
    OPTION (RECOMPILE);
    

    Based on these results my guess is that CASE invoked DATEADD for each row, while WHERE was smart enough to calculate it once. Plus COUNT is a tiny bit more efficient than SUM.

    In the end, conditional aggregation is only slightly slower than subqueries (1062 vs 1031), maybe because WHERE is a bit more efficient than CASE in itself, and besides, WHERE filters out quite a few rows, so COUNT has fewer rows to process.
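
    The SUM-to-COUNT swap works because COUNT ignores NULLs, so rows failing the condition simply aren't counted. A quick Python/sqlite3 sketch (sqlite standing in for SQL Server) to illustrate the equivalence:

```python
# COUNT(CASE ... THEN 1 END) equals SUM(CASE ... THEN 1 ELSE 0 END):
# COUNT skips NULLs, so rows failing the condition are not counted.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INT)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

row = con.execute("""
    SELECT SUM(CASE WHEN x >= 7 THEN 1 ELSE 0 END) AS sum_form,
           COUNT(CASE WHEN x >= 7 THEN 1 END)      AS count_form
    FROM t
""").fetchone()

print(row)  # (3, 3) -- both count the 3 rows with x >= 7
```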


    In practice I would use conditional aggregation, because I think the number of reads is more important. If your table is small enough to fit and stay in the buffer pool, then any query will be fast for the end user. But if the table is larger than the available memory, then I expect that reading from disk would slow the subqueries down significantly.


    Second test

    On the other hand, filtering the rows out as early as possible is also important.

    Here is a slight variation of the test, which demonstrates it. Here I set the threshold to be GETDATE() + 100 years, to make sure that no rows satisfy the filter criteria.

    Warm the cache (CPU=344)

    SELECT -- warm cache
        COUNT(*) AS all_cnt
    FROM LogTable
    OPTION (RECOMPILE);
    

    Subqueries (CPU=500)

    SELECT -- subqueries
    (
        SELECT count(*) FROM LogTable 
    ) all_cnt, 
    (
        SELECT count(*) FROM LogTable WHERE datesent > DATEADD(year,100,GETDATE())
    ) last_year_cnt
    OPTION (RECOMPILE);
    

    Original conditional aggregation (CPU=937)

    SELECT -- conditional original
        COUNT(*) AS all_cnt,
        SUM(CASE WHEN datesent > DATEADD(year,100,GETDATE())
                 THEN 1 ELSE 0 END) AS last_ten_year_cnt
    FROM LogTable
    OPTION (RECOMPILE);
    

    Conditional aggregation with variables (CPU=750)

    DECLARE @VarYear100 datetime = DATEADD(year,100,GETDATE());
    
    SELECT -- conditional variables
        COUNT(*) AS all_cnt,
        SUM(CASE WHEN datesent > @VarYear100
                 THEN 1 ELSE 0 END) AS last_ten_year_cnt
    FROM LogTable
    OPTION (RECOMPILE);
    

    Conditional aggregation with variables and COUNT instead of SUM (CPU=750)

    SELECT -- conditional variable, count, not sum
        COUNT(*) AS all_cnt,
        COUNT(CASE WHEN datesent > @VarYear100
                 THEN 1 ELSE NULL END) AS last_ten_year_cnt
    FROM LogTable
    OPTION (RECOMPILE);
    

    Below is a plan with subqueries. You can see that 0 rows went into the Stream Aggregate in the second subquery; all of them were filtered out at the Table Scan step.

    As a result, subqueries are again faster.

    Third test

    Here I changed the filtering criteria of the previous test: all > comparisons were replaced with <. As a result, the conditional COUNT counted all rows instead of none. Surprise, surprise! The conditional aggregation query took the same 750 ms, while the subqueries went from 500 ms up to 813 ms.

    Here is the plan for subqueries:

    Could you give me an example where conditional aggregation notably outperforms the subquery solution?

    Here it is: the performance of the subqueries method depends on the data distribution, while the performance of conditional aggregation does not.

    The subqueries method can be faster or slower than conditional aggregation; it depends on the data distribution.

    Knowing this you can decide which method to choose.


    Bonus details

    If you hover the mouse over the Table Scan operator you can see the Actual Data Size in different variants.

    1. Simple COUNT(*):

    2. Conditional aggregation:

    3. Subquery in test 2:

    4. Subquery in test 3:

    Now it becomes clear that the difference in performance is likely caused by the difference in the amount of data that flows through the plan.

    In the case of the simple COUNT(*), there is no Output List (no column values are needed) and the data size is smallest (43MB).

    In the case of conditional aggregation, this amount doesn't change between tests 2 and 3; it is always 72MB. The Output List has one column, datesent.

    In the case of subqueries, this amount does change depending on the data distribution.
