Transaction isolation - INSERTs dependent on previous records' values

Submitted by 自作多情 on 2019-12-13 15:36:32

Question


This question is related to, and arose from, the discussion of another question: What is the correct isolation level for Order header - Order lines transactions?

Imagine a scenario where we have the usual Orders_Headers and Orders_LineItems tables. Let's also say that we have special business rules that state:

  1. Each order has a Discount field which is calculated based on the time elapsed since the last order was entered.

  2. The next order's Discount field is calculated specially if there have been more than X orders in the last Y hours.

  3. The next order's Discount field is calculated specially if the average frequency of the last 10 orders was higher than X per minute.

  4. The next order's Discount field is calculated specially

The point here is to show that every order depends on the previous ones, so the isolation level is crucial.

We have a transaction (only the logic of the code is shown):

BEGIN TRANSACTION

INSERT INTO Order_Headers...

SET @Id = SCOPE_IDENTITY()

INSERT INTO Order_LineItems...(using @Id)

DECLARE @SomeVar INT

--just an example to show selecting the previous X orders,
--needed to calculate the Discount value for the new Order
SELECT @SomeVar = COUNT(*) FROM Order_Headers
WHERE ArbitraryCriteria

UPDATE Order_Headers
SET Discount = UDF(@SomeVar)
WHERE Id = @Id

COMMIT TRANSACTION

We also have another transaction to read orders:

SELECT TOP 10 * FROM Order_Headers
ORDER BY Id DESC

QUESTIONS

  1. Are SNAPSHOT isolation level for the first transaction and READ COMMITTED for the second appropriate levels?

  2. Is there a better way of approaching the CREATE/UPDATE transaction, or is this the way to do it?


Answer 1:


The serializable option:

Use a pessimistic locking strategy by way of the updlock and serializable table hints to acquire a key-range lock covering the where criteria (backed by a supporting index so that only the range the query needs is locked):

declare @Id int, @SomeVar int;
begin tran;

  select @SomeVar = count(OrderDate) 
  from Order_Headers with (updlock,serializable) 
  where OrderDate >= '20170101';

  insert into Order_Headers (OrderDate, SomeVar)
    select sysdatetime(), @SomeVar;

  set @Id = scope_identity();

  insert into Order_LineItems (id,cols)
    select @Id, cols
    from @TableValuedParameter;

commit tran;
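The key-range lock taken by the select above stays narrow only if the where predicate is backed by an index; without one, the hints can end up locking the whole table. A minimal sketch of such a supporting index, using the column names assumed from the example:

```sql
-- Assumed schema from the example above: an index on OrderDate lets the
-- (updlock, serializable) select take key-range locks on just the scanned
-- date range instead of escalating to broader locks.
create index IX_Order_Headers_OrderDate
    on Order_Headers (OrderDate);
```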

A good guide to the why and how of using the updlock and serializable table hints to lock a key range with a select, and why you need both, is Sam Saffron's upsert (update/insert) patterns.

Reference:

  • Documentation on serializable and other Table Hints - MSDN
  • Key-Range Locking - MSDN
  • SQL Server Isolation Levels: A Series - Paul White
  • Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask - Robert Sheldon
  • Isolation Level references curated by Brent Ozar



Answer 2:


The problem with snapshot (which I assume you decided to use) is not about inserting/reading. It's updates that you should be concerned about.

Snapshot isolation levels use row versioning. This means that any time you insert/update/delete a row, the previous version of that row is copied into tempdb (the version store, the location for those kinds of rows), and the row itself grows by 14 bytes for a versioning tag, so that a newly started transaction can read the row as of the last committed transaction. Keep in mind that these resized rows stay that way until you rebuild the index.

This should be an indicator that if your table is really busy, your indexes will fragment much faster, and the version store will add a certain percentage of overhead to your tempdb. So keep that in mind.
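If you want a rough idea of how much space row versioning is consuming, you can query the version-store DMV. A sketch (the column arithmetic is an approximation; each row of the DMV describes one versioned record):

```sql
-- Approximate current size of the tempdb version store, in MB.
-- sys.dm_tran_version_store returns one row per record version held
-- for snapshot/read-committed-snapshot readers.
select sum(record_length_first_part_in_bytes
         + record_length_second_part_in_bytes) / 1048576.0
       as version_store_mb
from sys.dm_tran_version_store;
```

Note that scanning this DMV is itself expensive on a busy server, so treat it as a diagnostic, not something to run continuously.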

An even bigger concern here, as I mentioned, is updates.

Any time you insert/delete/update a row, you take exclusive locks on those rows (possibly escalating to the object later). Since snapshot uses row versioning, inserts from another transaction take exclusive locks on a NEW row, and that is not a problem. However, if session 2 tries to update an existing row on which session 1 already holds an X lock, session 2 waits, and when session 1 commits, session 2's transaction is aborted with the snapshot update-conflict error (Msg 3960).

Read Committed and Serializable handle these issues by blocking, so you might want to take that approach and test all solutions before you actually implement one. Remember: all isolation levels will cause blocking on concurrent updates, but snapshot/read committed snapshot will simply fail instead.

Personally, I would have used read committed snapshot and altered the procedure to rerun in a catch block up to N times, but hey, that has flaws as well!
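A sketch of that retry idea, assuming the order logic from the question is inlined where the comment sits (3960 is the error number SQL Server raises for a snapshot update conflict; THROW without arguments re-raises the caught error):

```sql
-- Hypothetical retry wrapper: rerun the insert/update logic up to
-- @MaxRetries times when a snapshot update conflict (3960) occurs.
declare @Retry int = 0, @MaxRetries int = 3;

while @Retry < @MaxRetries
begin
    begin try
        begin tran;
        -- ... the insert/select/update logic from the question ...
        commit tran;
        break;  -- success, stop retrying
    end try
    begin catch
        if @@trancount > 0 rollback tran;
        if error_number() <> 3960 throw;  -- only retry update conflicts
        set @Retry += 1;
    end catch
end
```

One flaw, as hinted above: under load the retried select may keep seeing a moving target, so you still need a bound on retries and a failure path when it is exceeded.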



Source: https://stackoverflow.com/questions/44930171/transaction-isolation-inserts-dependant-on-previous-records-values
