Question
Do I understand correctly that table/row lock hints are used ONLY with the pessimistic transaction (TX) isolation model of concurrency?
In other words, when (if at all) can table/row lock hints be used while the optimistic TX isolation provided by SQL Server (2005 and higher) is engaged?
When would one need pessimistic TX isolation levels/hints in SQL Server 2005+ if the latter provides built-in optimistic (aka snapshot, aka versioning) concurrency isolation?
I did read that the pessimistic options are legacy and are no longer needed, though I have my doubts.
Also, given that optimistic (aka snapshot, aka versioning) TX isolation levels are built into SQL Server 2005+, when would one need to manually code for optimistic concurrency features?
The last question is inspired by having read:
- "Optimistic Concurrency in SQL Server" (September 28, 2007)
describing custom coding to provide versioning in SQL Server.
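To make the terms concrete, here is a rough, hypothetical sketch of the two mechanisms I mean (the table, column, and database names are made up):

-- Pessimistic style: an explicit lock hint, held until the transaction ends.
BEGIN TRANSACTION;
SELECT Balance
FROM   dbo.Accounts WITH (UPDLOCK, HOLDLOCK)
WHERE  AccountId = 42;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 42;
COMMIT;

-- Optimistic style: built-in snapshot (versioning) isolation, SQL Server 2005+.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Balance FROM dbo.Accounts WHERE AccountId = 42;   -- reads a row version, takes no shared lock
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 42;
COMMIT;   -- raises error 3960 if another writer changed the row after the snapshot was taken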
Answer 1:
Optimistic concurrency requires more resources and is more expensive when a conflict occurs.
Two sessions can read and modify the values, and a conflict only occurs when they try to apply their changes simultaneously. This means that in the case of a concurrent update, both versions of the value have to be stored somewhere (which of course requires resources).
Also, when a conflict occurs, the whole transaction usually has to be rolled back or the cursor refetched, which is expensive too.
The pessimistic concurrency model uses locking, thus reducing concurrency but improving performance. With two concurrent tasks, it may be cheaper for the second task to wait for a lock to be released than to spend CPU time and disk I/O on two simultaneous pieces of work, and then yet more on rolling back the less fortunate one and redoing it.
Say you have a query like this:

UPDATE mytable
SET    myvalue = very_complex_function(@range)
WHERE  rangeid = @range

with very_complex_function reading some data from mytable itself. In other words, this query transforms the subset of mytable that shares the value of @range.
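Just to make the example concrete, one hypothetical shape such a function could take, assuming mytable has columns rangeid and myvalue (the real logic would of course be much more expensive):

CREATE FUNCTION dbo.very_complex_function (@range int)
RETURNS decimal(18, 4)
AS
BEGIN
    -- Stand-in for the real, expensive logic: it reads mytable for the same range.
    DECLARE @result decimal(18, 4);
    SELECT @result = AVG(myvalue) * COUNT(*)
    FROM   mytable
    WHERE  rangeid = @range;
    RETURN @result;
END;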
Now, when two such queries work on the same range, there may be two scenarios:
- Pessimistic: the first query takes locks, the second query waits for it. The first query completes in 10 seconds, and the second one does too. Total: 20 seconds.
- Optimistic: both queries work independently (on the same input). This shares CPU time between them, plus some overhead for switching. They also have to keep their intermediate data somewhere, so the data is stored twice (which implies twice the I/O or memory). Let's say both complete at almost the same time, in 15 seconds. But when it is time to commit, the second query hits a conflict and has to roll back its changes (say this takes the same 15 seconds). It then needs to reread the data and redo the work with the new set of data (10 seconds).
As a result, both queries complete later than with pessimistic locking: 15 and 40 seconds vs. 10 and 20.
When would one need pessimistic TX isolation levels/hints in SQL Server 2005+ if the latter provides built-in optimistic (aka snapshot, aka versioning) concurrency isolation?
Optimistic isolation levels are, well, optimistic. You should not use them when you expect high contention on your data.
BTW, optimistic isolation (for read queries) was available in SQL Server 2000 too.
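As a rough illustration of where the pessimistic style still pays off under high contention, here is a common pattern on a hot table (the queue table and column names are hypothetical):

DECLARE @id int;

BEGIN TRANSACTION;

-- UPDLOCK makes a competing reader of the same row wait (or, with READPAST, skip it)
-- instead of both sessions doing the same work and one of them losing at commit time.
SELECT TOP (1) @id = QueueId
FROM   dbo.WorkQueue WITH (UPDLOCK, READPAST)
WHERE  Status = N'Pending'
ORDER  BY QueueId;

UPDATE dbo.WorkQueue
SET    Status = N'Processing'
WHERE  QueueId = @id;

COMMIT;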
Answer 2:
I have a detailed answer here: Developing Modifications that Survive Concurrency
Answer 3:
I think there's a bit of confusion over terminology here.
Optimistic locking/optimistic concurrency/... is a programming technique used to avoid the following scenario:
- start transaction
- read data, setting a "read" lock on it to prevent any deletes/modifications to our data
- display data on user's screen
- await user input, lock remains active
- keep awaiting user input, lock still preventing any writes/modifications
- user input never comes (for whatever reason)
- transaction times out (and usually not very quickly, since the user must be given a reasonable amount of time to enter input).
Optimistic locking replaces this with the following:
- start transaction READ
- read data, setting a "read" lock on it to prevent any deletes/modifications to our data
- end transaction READ, releasing the read lock just set
- display data on user's screen
- await user input, but data can be modified/deleted meanwhile by other transactions
- user input arrives
- start transaction WRITE
- verify that the data has remained unaltered, raising an exception if it has
- apply user updates
- end transaction WRITE
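A minimal T-SQL sketch of that WRITE transaction, assuming the table has a rowversion column (here called RowVer) and that the application kept the value it read during the READ transaction (the @ parameters stand for values supplied by the application):

BEGIN TRANSACTION;

UPDATE dbo.Customers
SET    Name = @newName
WHERE  CustomerId = @id
  AND  RowVer = @originalRowVer;   -- succeeds only if the row is unchanged since it was read

IF @@ROWCOUNT = 0
BEGIN
    -- Someone else modified or deleted the row in the meantime: raise the "optimistic" conflict.
    ROLLBACK TRANSACTION;
    RAISERROR(N'The data was changed by another user. Please reload and retry.', 16, 1);
END
ELSE
    COMMIT TRANSACTION;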
So the single "user transaction" (go fetch some data, change it, and write the update back) actually consists of two distinct "database transactions". What is usually called "isolation levels" applies to those database transactions. The "optimistic locking" that you refer to applies to the "user transaction".
The matter is further complicated in that, broadly speaking, two completely distinct strategies are possible for the "isolating the database transactions" part:
- MVCC
- 2-phase locking
I think the "snapshot versioning isolation level" means that the MVCC technique (well, one of its various possible variations) is being used for the database transaction. The other commonly known isolation levels apply more to transaction isolation using 2PL as the serialization(/isolation) technique. (And mixing them up can get messy ...)
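A small illustration of that last point in SQL Server terms (MyDb is a placeholder): the very same READ COMMITTED level can be served by either technique, depending on a database option.

-- READ COMMITTED backed by row versioning (MVCC-style):
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- READ COMMITTED backed by shared locks (2PL-style):
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT OFF;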
Source: https://stackoverflow.com/questions/4088430/when-to-prefer-pessimistic-model-of-transaction-isolation-over-optimistic-one