Why is PostgreSQL array access so much faster in C than in PL/pgSQL?

Asked 2020-12-08 02:42

I have a table schema which includes an int array column, and a custom aggregate function which sums the array contents. In other words, given the following:



        
2 Answers
  • 2020-12-08 03:25

    PL/pgSQL excels as server-side glue for SQL elements. Procedural elements and lots of assignments are not among its strengths. Assignments, tests or looping are comparatively expensive and only warranted if they help to take shortcuts one could not achieve with just SQL. The same logic implemented in C will always be faster, but you seem to be well aware of that ...

    Most of the time, pure SQL solutions are faster. Can you compare this simple, equivalent solution with your test setup?

    SELECT array_agg(a + b)
    FROM  (
       SELECT unnest('{1, 2, 3 }'::int[]) AS a
             ,unnest('{4, 5, 6 }'::int[]) AS b
       ) x
    

    You can wrap this into a simple SQL function or, for better performance, integrate it directly into your big query. Like this:

    SELECT tbl_id, array_agg(a + b)
    FROM  (
       SELECT tbl_id
             ,unnest(array1) AS a
             ,unnest(array2) AS b
       FROM   tbl
       ORDER  BY tbl_id
       ) x
    GROUP  BY tbl_id;
    

    Note that set-returning functions only run in parallel in a SELECT list if they return the same number of rows. I.e.: this works only for arrays of equal length.

    It would also be a good idea to run the test with a current version of PostgreSQL. 9.0 is a particularly unpopular release that hardly anybody uses (any more). That's even more true for the hopelessly outdated point release 9.0.2.

    You should at least update to the latest point release (9.0.15 at the time of writing) or, better yet, to the current version 9.3.2 to get many important bug and security fixes. That might be part of the explanation for the big difference in performance.

    Postgres 9.4

    • Performance improvements for array handling.

    And there is a cleaner solution for unnesting in parallel now:

    • Unnest multiple arrays in parallel
  • 2020-12-08 03:46

    Why?

    why is the C version so much faster?

    A PostgreSQL array is itself a pretty inefficient data structure. It can contain any data type and it's capable of being multi-dimensional, so lots of optimisations are just not possible. However, as you've seen, it's possible to work with the same array much faster in C.
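    To make that concrete, here is a simplified sketch of roughly what a Postgres array value carries before its first element, compared to a plain C int array. These structs are illustrative only, not the real definitions (those live in src/include/utils/array.h):

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Simplified illustration (NOT the real PostgreSQL definition): an
     * array value carries a varlena length word, a dimension count, a
     * null-bitmap offset and an element type OID before any data, and
     * every element access must account for all of them. A plain C int
     * array is just contiguous ints. */
    typedef struct
    {
        int32_t  vl_len_;    /* total size (varlena header) */
        int      ndim;       /* number of dimensions */
        int32_t  dataoffset; /* offset to data, or 0 if no null bitmap */
        uint32_t elemtype;   /* element type OID */
        /* then: dimension sizes, lower bounds, optional null bitmap, data */
    } SimplifiedArrayType;

    int main(void)
    {
        int plain[3] = {1, 2, 3};

        printf("array-value header: %zu bytes before any element\n",
               sizeof(SimplifiedArrayType));
        printf("plain C int[3]:     %zu bytes, no header at all\n",
               sizeof plain);
        return 0;
    }
    ```

    The flexibility (any element type, any dimensionality, NULLs) is exactly what rules out the optimisations a plain C array gets for free.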

    That's because array access in C can avoid a lot of the repeated work involved in PL/PgSQL array access. Take a look at array_ref in src/backend/utils/adt/arrayfuncs.c, then at how it's invoked from ExecEvalArrayRef in src/backend/executor/execQual.c. ExecEvalArrayRef runs for each individual array access from PL/PgSQL, as you can see by attaching gdb to the pid found from select pg_backend_pid(), setting a breakpoint at ExecEvalArrayRef, continuing, and running your function.

    More importantly, in PL/PgSQL every statement you execute is run through the query executor machinery. This makes small, cheap statements fairly slow even allowing for the fact that they're pre-prepared. Something like:

    a := b + c
    

    is actually executed by PL/PgSQL more like:

    SELECT b + c INTO a;
    

    You can observe this if you turn debug levels high enough, attach a debugger and break at a suitable point, or use the auto_explain module with nested statement analysis. To give you an idea of how much overhead this imposes when you're running lots of tiny simple statements (like array accesses), take a look at this example backtrace and my notes on it.

    There is also a significant start-up overhead to each PL/PgSQL function invocation. It isn't huge, but it's enough to add up when it's being used as an aggregate.

    A faster approach in C

    In your case I would probably do it in C, as you have done, but I'd avoid copying the array when called as an aggregate. You can check for whether it's being invoked in aggregate context:

    if (AggCheckCallContext(fcinfo, NULL))
    

    and if so, use the original value as a mutable placeholder, modifying it and then returning it instead of allocating a new one. I'll write a demo to verify that this is possible with arrays shortly... (update) or not so shortly; I forgot how absolutely horrible working with PostgreSQL arrays in C is. Here we go:

    // append to contrib/intarray/_int_op.c
    
    PG_FUNCTION_INFO_V1(add_intarray_cols);
    Datum           add_intarray_cols(PG_FUNCTION_ARGS);
    
    Datum
    add_intarray_cols(PG_FUNCTION_ARGS)
    {
        ArrayType  *a,
               *b;
    
        int i, n;
    
        int *da,
            *db;
    
        if (PG_ARGISNULL(1))
            ereport(ERROR, (errmsg("Second operand must be non-null")));
        b = PG_GETARG_ARRAYTYPE_P(1);
        CHECKARRVALID(b);
    
        if (AggCheckCallContext(fcinfo, NULL))
        {
            // Called in aggregate context...
            if (PG_ARGISNULL(0))
                // ... for the first time in a run, so the state in the 1st
                // argument is null. Create a state-holder array by copying the
                // second input array and return it.
                PG_RETURN_POINTER(copy_intArrayType(b));
            else
                // ... for a later invocation in the same run, so we'll modify
                // the state array directly.
                a = PG_GETARG_ARRAYTYPE_P(0);
        }
        else 
        {
            // Not in aggregate context
            if (PG_ARGISNULL(0))
                ereport(ERROR, (errmsg("First operand must be non-null")));
            // Copy 'a' for our result. We'll then add 'b' to it.
            a = PG_GETARG_ARRAYTYPE_P_COPY(0);
            CHECKARRVALID(a);
        }
    
        // This requirement could probably be lifted pretty easily:
        if (ARR_NDIM(a) != 1 || ARR_NDIM(b) != 1)
            ereport(ERROR, (errmsg("One-dimensional arrays are required")));
    
        // ... as could this by assuming the un-even ends are zero, but it'd be a
        // little ickier.
        n = (ARR_DIMS(a))[0];
        if (n != (ARR_DIMS(b))[0])
            ereport(ERROR, (errmsg("Arrays are of different lengths")));
    
        da = ARRPTR(a);
        db = ARRPTR(b);
        for (i = 0; i < n; i++)
        {
                // Fails to check for integer overflow. You should add that.
            *da = *da + *db;
            da++;
            db++;
        }
    
        PG_RETURN_POINTER(a);
    }
    

    and append this to contrib/intarray/intarray--1.0.sql:

    CREATE FUNCTION add_intarray_cols(_int4, _int4) RETURNS _int4
    AS 'MODULE_PATHNAME'
    LANGUAGE C IMMUTABLE;
    
    CREATE AGGREGATE sum_intarray_cols(_int4) (sfunc = add_intarray_cols, stype=_int4);
    

    (more correctly you'd create intarray--1.1.sql and intarray--1.0--1.1.sql and update intarray.control. This is just a quick hack.)

    Use:

    make USE_PGXS=1
    make USE_PGXS=1 install
    

    to compile and install.

    Now DROP EXTENSION intarray; (if you already have it) and CREATE EXTENSION intarray;.

    You'll now have the aggregate function sum_intarray_cols available to you (like your sum(int4[])), as well as the two-operand add_intarray_cols (like your array_add).

    By specializing in integer arrays, a whole bunch of complexity goes away. A bunch of copying is avoided in the aggregate case, since we can safely modify the "state" array (the first argument) in place. To keep things consistent, in the case of non-aggregate invocation we get a copy of the first argument so we can still work with it in place and return it.

    This approach could be generalised to support any data type by using the fmgr cache to look up the add function for the type(s) of interest, etc. I'm not particularly interested in doing that, so if you need it (say, to sum columns of NUMERIC arrays) then ... have fun.

    Similarly, if you need to handle dissimilar array lengths, you can probably work out what to do from the above.
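    As for the overflow check flagged in the loop comment above, it could look something like this minimal standalone sketch (add_checked is a hypothetical helper; inside the real extension you'd raise the error with ereport(ERROR, ...) rather than exiting):

    ```c
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper sketching the overflow check that the loop in
     * add_intarray_cols omits: refuse to add when the result would not
     * fit in an int. The comparisons are arranged so that they never
     * overflow themselves. */
    static int add_checked(int a, int b)
    {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b))
        {
            /* In the extension itself: ereport(ERROR, ...) */
            fprintf(stderr, "integer out of range\n");
            exit(EXIT_FAILURE);
        }
        return a + b;
    }

    int main(void)
    {
        printf("%d\n", add_checked(1000000000, 1000000000)); /* fits */
        printf("%d\n", add_checked(INT_MAX, -1));            /* fits */
        return 0;
    }
    ```

    In the loop body you'd then write *da = add_checked(*da, *db); instead of the unchecked addition.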
