transaction-isolation

How to Select UNCOMMITTED rows only in SQL Server?

Submitted by 蓝咒 on 2019-12-22 04:18:26
Question: I am working on a DW project where I need to query a live CRM system. The standard isolation level negatively influences performance, so I am tempted to use NOLOCK / TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. I want to know how many of the selected rows come from dirty reads.

Answer 1: Maybe you can do this: SELECT * FROM T WITH (SNAPSHOT) EXCEPT SELECT * FROM T WITH (READCOMMITTED, READPAST) But this is inherently racy.

Answer 2: Why do you need to know that? You use TRANSACTION ISOLATION LEVEL READ
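
The EXCEPT trick in Answer 1 can be pictured as a set difference: rows visible to an uncommitted-read scan that are absent from a committed-only scan are the dirty ones. A toy Python model of that idea (table contents are invented for illustration; this is not a substitute for the real, race-prone SQL):

```python
# Toy model of the EXCEPT trick: rows a READ UNCOMMITTED-style scan sees
# that a (READCOMMITTED, READPAST) scan does not see are the dirty reads.
committed_rows = {(1, "paid"), (2, "shipped")}        # committed-only scan
all_visible_rows = committed_rows | {(3, "pending")}  # scan that also sees in-flight writes

dirty_rows = all_visible_rows - committed_rows        # SELECT ... EXCEPT SELECT ...
print(sorted(dirty_rows))   # → [(3, 'pending')]
```

As the answer notes, the real query is inherently racy: the two scans run at different instants, so rows can commit or roll back between them.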

Isolation levels in oracle [closed]

Submitted by 青春壹個敷衍的年華 on 2019-12-21 16:57:29
Question: Closed. This question needs to be more focused and is not currently accepting answers. Closed 9 months ago. I would like to know the different isolation levels with respect to commit, and would also like to know about row-level and table-level locks.

Answer 1: ANSI/ISO SQL defines four isolation levels: serializable, repeatable read, read committed, read uncommitted. According to Oracle's Database
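
The four ANSI/ISO levels are usually distinguished by which read anomalies each one permits; the small table below encodes that standard mapping. (Note that Oracle itself only implements READ COMMITTED and SERIALIZABLE, plus a READ ONLY mode.)

```python
# The four ANSI/ISO SQL isolation levels and the read anomalies each permits.
PERMITS = {
    "read uncommitted": {"dirty read", "non-repeatable read", "phantom read"},
    "read committed":   {"non-repeatable read", "phantom read"},
    "repeatable read":  {"phantom read"},
    "serializable":     set(),
}

for level, anomalies in PERMITS.items():
    print(f"{level}: {sorted(anomalies) or 'none'}")
```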

Atomic UPDATE to increment integer in Postgresql

Submitted by 孤街浪徒 on 2019-12-21 04:56:35
Question: I'm trying to figure out whether the query below is safe for the following scenario: I need to generate sequential numbers without gaps. Since I need to track many of them, I have a table holding sequence records, with an integer sequence column. To get the next sequence value, I fire off the SQL statement below.

WITH updated AS (
    UPDATE sequences
    SET sequence = sequence + ?
    WHERE sequence_id = ?
    RETURNING sequence
)
SELECT * FROM updated;

My question is: is this query safe when multiple
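
The key property the question is after is that the increment and the read of the new value happen as one atomic step. A runnable sketch of the same pattern, using SQLite as a stand-in for PostgreSQL (older SQLite lacks RETURNING, so the UPDATE and SELECT run inside one write transaction, which is equivalent under SQLite's single-writer lock; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
conn.execute("CREATE TABLE sequences (sequence_id INTEGER PRIMARY KEY, sequence INTEGER)")
conn.execute("INSERT INTO sequences VALUES (1, 0)")

def next_sequence(conn, seq_id, step=1):
    conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
    conn.execute(
        "UPDATE sequences SET sequence = sequence + ? WHERE sequence_id = ?",
        (step, seq_id))
    (value,) = conn.execute(
        "SELECT sequence FROM sequences WHERE sequence_id = ?",
        (seq_id,)).fetchone()
    conn.execute("COMMIT")
    return value

print([next_sequence(conn, 1) for _ in range(3)])   # → [1, 2, 3]
```

In PostgreSQL the single `UPDATE ... RETURNING` statement is atomic on its own: concurrent callers block on the row lock and each sees the value its own increment produced, so no two callers get the same number.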

Transaction isolation - INSERTS dependant on previous records values

Submitted by 自作多情 on 2019-12-13 15:36:32
Question: This question is related to / came from a discussion about another thing: What is the correct isolation level for Order header - Order lines transactions? Imagine a scenario where we have the usual Orders_Headers and Orders_LineItems tables. Let's say also that we have special business rules: Each order has a Discount field which is calculated based on the time passed since the last order was entered. Each next order's Discount field is calculated specially if there have been more than X orders in the last Y hours.
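
Because the discount for a new header depends on rows inserted by concurrent transactions, the count and the insert must happen in one transaction (SERIALIZABLE, or an equivalent lock, in a real RDBMS). A minimal sketch of that rule, with invented names and thresholds, using SQLite where BEGIN IMMEDIATE serializes the writers:

```python
import sqlite3

X_ORDERS, DISCOUNT = 2, 10   # hypothetical values for the X / Y business rule

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute(
    "CREATE TABLE orders_headers "
    "(id INTEGER PRIMARY KEY, entered_at REAL, discount INTEGER)")

def place_order(conn, now, window_hours=1):
    conn.execute("BEGIN IMMEDIATE")  # read + insert as one serialized unit
    (recent,) = conn.execute(
        "SELECT COUNT(*) FROM orders_headers WHERE entered_at > ?",
        (now - window_hours * 3600,)).fetchone()
    discount = DISCOUNT if recent >= X_ORDERS else 0
    conn.execute(
        "INSERT INTO orders_headers (entered_at, discount) VALUES (?, ?)",
        (now, discount))
    conn.execute("COMMIT")
    return discount

print([place_order(conn, t) for t in (0, 60, 120)])   # → [0, 0, 10]
```

Under READ COMMITTED, two concurrent orders could each count the same "recent orders" and both miss (or both apply) the discount; the serialized read-then-insert prevents that.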

InnoDB locking for INSERT/UPDATE concurrent transactions

Submitted by 三世轮回 on 2019-12-13 03:05:32
Question: I'm looking to ensure isolation when multiple transactions may execute a database insert or update, where the old value is required for the process. Here is an MVP in Python-like pseudo code; the default isolation level is assumed:

sql('BEGIN')
rows = sql('SELECT `value` FROM table WHERE `id`=<id> FOR UPDATE')
if rows:
    old_value, = rows[0]
    process(old_value, new_value)
    sql('UPDATE table SET `value`=<new_value> WHERE `id`=<id>')
else:
    sql('INSERT INTO table (`id`, `value`) VALUES (<id>, <new
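
The pseudocode above is the classic read-modify-write pattern. A runnable version using SQLite as a stand-in (SQLite has no SELECT ... FOR UPDATE, so BEGIN IMMEDIATE plays the role the row lock plays in InnoDB: it blocks other writers until commit; `process` is replaced by a trivial merge for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, value TEXT)")

def upsert(conn, id_, new_value):
    conn.execute("BEGIN IMMEDIATE")   # stands in for SELECT ... FOR UPDATE
    row = conn.execute("SELECT value FROM t WHERE id = ?", (id_,)).fetchone()
    if row:
        merged = row[0] + "+" + new_value   # stand-in for process(old, new)
        conn.execute("UPDATE t SET value = ? WHERE id = ?", (merged, id_))
    else:
        conn.execute("INSERT INTO t (id, value) VALUES (?, ?)", (id_, new_value))
    conn.execute("COMMIT")

upsert(conn, 1, "a")
upsert(conn, 1, "b")
print(conn.execute("SELECT value FROM t WHERE id = 1").fetchone()[0])   # → a+b
```

One InnoDB caveat worth knowing: when the row does not exist, SELECT ... FOR UPDATE takes a gap lock rather than a row lock, and two concurrent transactions can still race to INSERT; handling the duplicate-key error (or using INSERT ... ON DUPLICATE KEY UPDATE) covers that branch.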

Cassandra row level isolation

Submitted by 独自空忆成欢 on 2019-12-11 13:38:35
Question: I have a table created in CQL:

create table isolation_demo (key text, column1 text, column2 text, column3 text, primary key (key, column1, column2));

I have 2 statements in a batch:

update isolation_demo set column3 = 'ABC' where key = 1 and column1 = 1 and column2 = 1;
delete from isolation_demo where key = 1 and column1 = 2 and column2 = 2;

Here both statements share the same partition key (key=1) but have different clustering column values. Will these 2 statements be isolated?

Answer 1: These queries must be
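
Cassandra's documented guarantee is that batched mutations targeting the same partition are applied atomically and in isolation, while mutations spanning partitions are not. A toy check of the condition that matters here, with the batch from the question expressed as (partition key, clustering key, operation) tuples:

```python
# Simplified model: a batch is isolated iff all its mutations land in
# ONE partition. Real Cassandra adds replication, but the partition
# rule is the one that decides isolation.
def partitions_touched(batch):
    return {pkey for pkey, _clustering, _op in batch}

batch = [
    ("1", ("1", "1"), "update column3 = 'ABC'"),
    ("1", ("2", "2"), "delete"),
]

isolated = len(partitions_touched(batch)) == 1
print(isolated)   # → True: same partition key, different clustering columns
```

So the two statements above, sharing key=1, are applied in isolation even though they hit different clustering rows.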

How to set a Ruby on Rails 4+ app's default db isolation level

Submitted by 断了今生、忘了曾经 on 2019-12-11 10:56:12
Question: I want to make my application serialize every transaction by default. I'd then relax isolation based on performance measurements and on knowing what data particular actions/transactions use and change. I doubt serializable-by-default would get into the framework, as it'd slow things down and be difficult to explain. But I don't want to deal with db corruption, and I do want internally consistent aggregate calculations. For case-by-case isolation levels there is Rails postgresql how to set
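
One route to an app-wide default, assuming PostgreSQL, is to set the session's default isolation in database.yml via the PostgreSQL adapter's `variables:` key, which is passed through as per-session SET commands. A sketch (database name and environment are illustrative):

```yaml
# config/database.yml -- sketch: make every new connection default to
# SERIALIZABLE; individual transactions can still opt down.
production:
  adapter: postgresql
  database: myapp_production
  variables:
    default_transaction_isolation: serializable
```

Case-by-case relaxation then uses the per-transaction option, e.g. `Order.transaction(isolation: :read_committed) { ... }`, which Rails supports from 4.0 on.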

In Django, how to achieve repeatable reads for a transaction?

Submitted by 风流意气都作罢 on 2019-12-10 02:04:28
Question: I have a function that runs multiple queries on the same dataset, and I want to ensure all the queries see exactly the same data. In SQL terms, this means the REPEATABLE READ isolation level for the databases that support it. I don't mind having a higher level, or even a complete lockdown, if the database isn't capable. As far as I can see, this isn't the case. I.e. if I run something like this code in one Python shell:

with transaction.atomic():
    for t in range(0, 60):
        print("{0}: {1}".format(t,
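
`transaction.atomic()` only demarcates the transaction; the isolation level comes from the database connection. In Django 2.0+ it can be raised connection-wide in settings via the backend's `OPTIONS` (the MySQL backend takes a level name string; the PostgreSQL backend takes a psycopg2 isolation constant instead). A sketch, assuming the MySQL backend and an invented database name:

```python
# settings.py fragment -- every connection, and hence every
# transaction.atomic() block, runs at REPEATABLE READ.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "OPTIONS": {"isolation_level": "repeatable read"},
    }
}
```

On older Django versions (current when this question was asked) the usual workaround was to issue `SET TRANSACTION ISOLATION LEVEL REPEATABLE READ` with `cursor.execute()` as the first statement inside the atomic block, or to set the default at the database server.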

Prevent concurrent execution of SQL Server stored procedure with ado.net in Asp.net

Submitted by 孤街浪徒 on 2019-12-08 16:20:37
I want to prevent two users from executing the same stored procedure concurrently. If two ASP.NET requests come in to execute that stored procedure, they should execute serially, one after another. The database is SQL Server and execution is handled by ADO.NET. Will any of the methods below help achieve this? Which is the most suitable? Is there any other way to achieve the same?

1. Execute the stored procedure in an ADO.NET transaction with the isolation level set to Serializable
2. Use sp_getapplock inside the stored procedure and release it at the end
3. Execute the stored procedure with ado
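
Option 2 is the one purpose-built for this: sp_getapplock takes a named, exclusive application lock (e.g. `EXEC sp_getapplock @Resource = 'usp_DoWork', @LockMode = 'Exclusive'` at the top of the procedure), so callers queue on the name rather than on data rows. An in-process analogue of that behavior, using a named `threading.Lock` in place of the SQL Server app lock (proc name is invented):

```python
import threading

# In-process analogue of sp_getapplock: callers serialize on a lock
# looked up by name, the way sessions serialize on @Resource.
_app_locks = {}
_registry_guard = threading.Lock()

def get_app_lock(name):
    with _registry_guard:
        return _app_locks.setdefault(name, threading.Lock())

results = []

def critical_proc(i):
    with get_app_lock("usp_DoWork"):   # one caller inside at a time
        results.append(i)

threads = [threading.Thread(target=critical_proc, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results))   # → [0, 1, 2, 3, 4]
```

Serializable isolation (option 1) does not by itself serialize procedure executions; it serializes conflicting data access, and under contention tends to produce deadlocks and retries rather than an orderly queue.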

PostgreSQL generic handler for serialization failure

Submitted by 廉价感情. on 2019-12-08 12:38:47
Question: This is a follow-up question to this one, so I know I can use (blocking) LOCKs, but I want to use predicate locks and SERIALIZABLE transaction isolation. What I'd like to have is a generic handler of serialization failures that would retry the function/query X number of times. As an example, I have this:

CREATE SEQUENCE account_id_seq;
CREATE TABLE account (
    id integer NOT NULL DEFAULT nextval('account_id_seq'),
    title character varying(40) NOT NULL,
    balance integer NOT NULL DEFAULT 0,
    CONSTRAINT