Average stock history table

时光说笑 2021-01-01 06:27

I have a table that tracks changes in stock over time for some stores and products. The value is the absolute stock level, but we only insert a new row when the stock changes.

3 answers
  •  不思量自难忘°
    2021-01-01 06:48

    The special difficulty of this task: you cannot just pick data points inside your time range; you also have to consider the latest data point before the time range and the earliest data point after it. This varies for every row, and each data point may or may not exist. That requires a sophisticated query and makes it hard to use indexes.

    You could use range types and operators (Postgres 9.2+) to simplify calculations:

    WITH input(a,b) AS (SELECT '2013-01-01'::date  -- your time frame here
                             , '2013-01-15'::date) -- inclusive borders
    SELECT store_id, product_id
         , sum(upper(days) - lower(days))                    AS days_in_range
         , round(sum(value * (upper(days) - lower(days)))::numeric
                        / (SELECT b-a+1 FROM input), 2)      AS your_result
         , round(sum(value * (upper(days) - lower(days)))::numeric
                        / sum(upper(days) - lower(days)), 2) AS my_result
    FROM (
       SELECT store_id, product_id, value, s.day_range * x.day_range AS days
       FROM  (
          SELECT store_id, product_id, value
               , daterange (day, lead(day, 1, now()::date)
                 OVER (PARTITION BY store_id, product_id ORDER BY day)) AS day_range 
          FROM   stock
          ) s
       JOIN  (
          SELECT daterange(a, b+1) AS day_range
          FROM   input
          ) x ON s.day_range && x.day_range
       ) sub
    GROUP  BY 1,2
    ORDER  BY 1,2;
    

    Note that I use the column name day instead of date; I never use basic type names as column names.

    In the subquery sub I fetch the day from the next row for each item with the window function lead(), using the built-in option to provide "today" as default where there is no next row.
    With this I form a daterange and match it against the input with the overlap operator &&, computing the resulting date range with the intersection operator *.
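
    As a standalone illustration of those two operators (all dates are made up):

    SELECT daterange('2013-01-10'::date, '2013-01-20'::date)
        && daterange('2013-01-01'::date, '2013-01-16'::date) AS do_overlap     -- true
         , daterange('2013-01-10'::date, '2013-01-20'::date)
         * daterange('2013-01-01'::date, '2013-01-16'::date) AS intersection;  -- [2013-01-10,2013-01-16)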

    All ranges here have an exclusive upper border. That's why I add one day (b+1) to the input range. This way we can simply subtract lower(range) from upper(range) to get the number of days.
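
    With exclusive upper borders, that subtraction yields exactly the number of covered days (made-up example):

    SELECT upper(daterange('2013-01-01'::date, '2013-01-16'::date))
         - lower(daterange('2013-01-01'::date, '2013-01-16'::date)) AS num_days;  -- 15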

    I assume that "yesterday" is the latest day with reliable data. "Today" can still change in a real life application. Consequently, I use "today" (now()::date) as exclusive upper border for open ranges.

    I provide two results:

    • your_result agrees with your displayed results.
      You divide by the number of days in your date range unconditionally. For instance, if an item is only listed for the last day, you get a very low (misleading!) "average".

    • my_result computes the same or higher numbers.
      I divide by the actual number of days an item is listed. For instance, if an item is only listed for the last day, I return the listed value as average.

    To make sense of the difference, I added the number of days the item was listed: days_in_range.
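
    A made-up illustration with the 15-day window above: an item that appears only on the last day with value 30 gets days_in_range = 1, your_result = round(30 / 15, 2) = 2.00, but my_result = 30.00.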

    SQL Fiddle.

    Index and performance

    For this kind of data, old rows typically don't change. This would make an excellent case for a materialized view:

    CREATE MATERIALIZED VIEW mv_stock AS
    SELECT store_id, product_id, value
         , daterange (day, lead(day, 1, now()::date) OVER (PARTITION BY store_id, product_id
                                                           ORDER BY day)) AS day_range
    FROM   stock;
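
    Since a materialized view is a snapshot, it has to be refreshed after new stock rows arrive:

    REFRESH MATERIALIZED VIEW mv_stock;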
    

    Then you can add a GiST index which supports the relevant operator &&:

    CREATE INDEX mv_stock_range_idx ON mv_stock USING gist (day_range);
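
    A sketch of how the main query might then read against the MV (same made-up time frame as above; only my_result shown):

    WITH input(a,b) AS (SELECT '2013-01-01'::date, '2013-01-15'::date)
    SELECT store_id, product_id
         , round(sum(value * (upper(days) - lower(days)))::numeric
                        / sum(upper(days) - lower(days)), 2) AS my_result
    FROM  (
       SELECT store_id, product_id, value, s.day_range * x.day_range AS days
       FROM   mv_stock s
       JOIN  (SELECT daterange(a, b+1) AS day_range FROM input) x
              ON s.day_range && x.day_range
       ) sub
    GROUP  BY 1,2
    ORDER  BY 1,2;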
    

    Big test case

    I ran a more realistic test with 200k rows. The query using the MV was about 6 times as fast as the query above, which in turn was roughly 10x as fast as @Joop's query. Performance depends heavily on data distribution. An MV helps most with big tables and a high frequency of entries. Also, if the table has columns that are not relevant to this query, an MV can be smaller. It's a question of cost vs. gain.

    I've put all solutions posted so far (adapted where necessary) into a big fiddle to play with:

    SQL Fiddle with big test case.
    SQL Fiddle with only 40k rows - to avoid timeout on sqlfiddle.com
