Why are batch inserts/updates faster? How do batch updates work?


Why are batch inserts faster? Is it because the connection and setup overhead for inserting a single row is the same for a set of rows? What other factors make batch inserts faster?

4 Answers
  • 2020-11-30 21:51

    In a batch update, the database works against a set of data; in a row-by-row update it has to run the same command as many times as there are rows. So if you insert a million rows in a batch, the command is sent and processed once; in a row-by-row approach, it is sent and processed a million times. This is also why you never want to use a cursor in SQL Server or a correlated subquery.

    An example of a set-based update in SQL Server:

    update mytable
    set myfield = 'test'
    where myfield is null
    

    This would update all 1 million records that are null in one step. A cursor update (which is how you would update a million rows in a non-batch fashion) would iterate through each row one at a time and update it.

    The problem with a batch insert or update is the size of the batch. If you try to update too many records at once, the database may lock the table for the duration of the process, locking all other users out. So you may need a loop that takes only part of the batch at a time, as sketched below (although pretty much any chunk size greater than one row will still be faster than one row at a time).

    This is slower than updating, inserting or deleting the whole batch in one statement, but faster than row-by-row operations, and it may be needed in a production environment with many users and little available downtime when users are not trying to see and update other records in the same table. The size of the batch depends greatly on the database structure and on exactly what is happening (tables with triggers and lots of constraints are slower, as are tables with lots of fields, and so require smaller batches).
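
    A minimal sketch of such a chunked update in SQL Server, assuming the same mytable/myfield example from above (the 50,000-row chunk size is purely illustrative):

    WHILE 1 = 1
    BEGIN
        -- update at most 50,000 rows per iteration so locks stay short-lived
        UPDATE TOP (50000) mytable
        SET    myfield = 'test'
        WHERE  myfield IS NULL;

        IF @@ROWCOUNT = 0 BREAK;  -- stop once nothing is left to update
    END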

  • 2020-11-30 21:57

    Why are batch inserts faster?

    For numerous reasons, but the major three are these:

    • The query doesn't need to be reparsed.
    • The values are transmitted in one round trip to the server.
    • The commands are inside a single transaction.

    Is it because the connection and setup overhead for inserting a single row is the same for a set of rows?

    Partially yes, see above.

    How do batch updates work?

    This depends on the RDBMS.

    In Oracle you can transmit all values as a collection and use this collection as a table in a JOIN.

    In PostgreSQL and MySQL, you can use the following syntax:

    INSERT
    INTO    mytable
    VALUES 
            (value1),
            (value2),
            …
    

    You can also prepare a query once and call it in some kind of a loop. Usually there are methods to do this in a client library.
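
    As a sketch at the SQL level (PostgreSQL's PREPARE/EXECUTE syntax; client libraries expose the same prepare-once, execute-many cycle through their own APIs), assuming the single-column mytable above holds text values:

    -- parse and plan the statement once …
    PREPARE ins (text) AS
    INSERT
    INTO    mytable
    VALUES  ($1);

    -- … then execute it as many times as needed, e.g. from a loop in application code
    EXECUTE ins('value1');
    EXECUTE ins('value2');

    DEALLOCATE ins;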

    Assuming the table has no uniqueness constraints, insert statements don't really have any effect on other insert statements in the batch. But, during batch updates, an update can alter the state of the table and hence can affect the outcome of other update queries in the batch.

    Yes, and you may or may not benefit from this behavior.

    I know that batch insert queries have a syntax where you have all the insert values in one big query. What do batch update queries look like?

    In Oracle, you use a collection in a join:

    MERGE
    INTO    mytable
    USING   TABLE(:mycol)
    ON      …
    WHEN MATCHED THEN
    UPDATE
    SET     …
    

    In PostgreSQL:

    UPDATE  mytable
    SET     s_start = 1
    FROM    (
            VALUES
            (value1),
            (value2),
            …
            ) q
    WHERE   …
    
  • 2020-11-30 22:08

    I was looking for an answer on the same subject, about "bulk/batch" updates. People often describe the problem by comparing it with the INSERT clause that takes multiple value sets (the "bulk" part).

    INSERT INTO mytable (mykey, mytext, myint)
    VALUES 
      (1, 'text1', 11),
      (2, 'text2', 22),
      ...
    

    A clear answer was still eluding me, but I found the solution here: http://www.postgresql.org/docs/9.1/static/sql-values.html

    To make it clear:

    UPDATE mytable
    SET 
      mytext = myvalues.mytext,
      myint = myvalues.myint
    FROM (
      VALUES
        (1, 'textA', 99),
        (2, 'textB', 88),
        ...
    ) AS myvalues (mykey, mytext, myint)
    WHERE mytable.mykey = myvalues.mykey
    

    It has the same property of being "bulk", i.e. containing a lot of data in one statement.

  • 2020-11-30 22:14

    The other posts explain why bulk statements are faster and how to do it with literal values.

    I think it is important to know how to do it with placeholders. Not using placeholders may lead to gigantic command strings, to quoting/escaping bugs and thereby to applications that are prone to SQL injection.

    Bulk insert with placeholders in PostgreSQL >= 9.1

    To insert an arbitrary number of rows into table "mytable", consisting of columns "col1", "col2" and "col3", all in one go (one statement, one transaction):

    INSERT INTO mytable (col1, col2, col3)
     VALUES (unnest(?), unnest(?), unnest(?))
    

    You need to supply three arguments to this statement. The first one has to contain all the values for the first column and so on. Consequently, all the arguments have to be lists/vectors/arrays of equal length.
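
    For illustration only, here is the same idea with literal arrays standing in for the three placeholder arguments, written with a SELECT list instead of VALUES (column types are assumed to be integer, text and integer):

    INSERT INTO mytable (col1, col2, col3)
    SELECT unnest(ARRAY[1, 2, 3]),          -- all values for col1
           unnest(ARRAY['a', 'b', 'c']),    -- all values for col2
           unnest(ARRAY[11, 22, 33]);       -- all values for col3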

    Bulk update with placeholders in PostgreSQL >= 9.1

    Let's say your table is called "mytable". It consists of the columns "key" and "value".

    update mytable 
      set value = data_table.new_value
      from 
        (select unnest(?) as key, unnest(?) as new_value) as data_table
      where mytable.key = data_table.key
    

    I know, this is not easy to understand. It looks like obfuscated SQL. On the other hand: it works, it scales, it works without any string concatenation, it is safe, and it is blazingly fast.

    You need to supply two arguments to this statement. The first one has to be a list/vector/array that contains all the values for column "key". Of course, the second one has to contain all the values for column "value".
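
    For illustration only, the same statement with literal arrays standing in for the two placeholders (integer keys and text values are assumed):

    update mytable
      set value = data_table.new_value
      from
        (select unnest(ARRAY[1, 2, 3])       as key,       -- all keys
                unnest(ARRAY['a', 'b', 'c']) as new_value  -- matching new values
        ) as data_table
      where mytable.key = data_table.key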

    In case you hit size limits, you may have to look into COPY ... FROM STDIN (PostgreSQL).
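
    A minimal sketch of that route, assuming the same "key"/"value" table; when run through psql, the data rows follow the statement inline and are terminated by a backslash-period line:

    COPY mytable (key, value) FROM STDIN WITH (FORMAT csv);
    1,first
    2,second
    \.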
