locking

SQLite “Database is locked” error in a multithreaded application

时光怂恿深爱的人放手 submitted on 2020-01-01 03:25:20
Question: There is a multithreaded application that works with a large DB file (>600 MB). The "database is locked" problem started when I added BLOB data and began operating on >30 KB of BLOB data per request. I think the problem is related to slow HDD speed. It looks like SQLite deletes the -journal file, one thread of my application gets out of the lock (because the -journal file was applied and deleted), and another of my threads wants to do something with the DB, but SQLite is still updating the DB file... Sure, I can add minute-long delays after
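
A minimal sketch of the usual first-line mitigations, assuming the xerial sqlite-jdbc driver (the PRAGMAs themselves are standard SQLite): busy_timeout makes a thread wait and retry instead of failing immediately with "database is locked", and WAL journal mode lets readers proceed while a single writer is active.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SqliteSetup {
        // Open a connection that tolerates short periods of lock contention.
        public static Connection open(String path) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:sqlite:" + path);
            try (Statement st = conn.createStatement()) {
                st.execute("PRAGMA busy_timeout = 5000"); // wait up to 5 s for the lock
                st.execute("PRAGMA journal_mode = WAL");  // readers no longer block behind the writer
            }
            return conn;
        }
    }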

Should SELECT … FOR UPDATE always contain ORDER BY?

余生颓废 submitted on 2020-01-01 02:29:09
Question: Let's say we execute... SELECT * FROM MY_TABLE FOR UPDATE ...and there is more than one row in MY_TABLE. Theoretically, if two concurrent transactions execute this statement but happen to traverse (and therefore lock) the rows in a different order, a deadlock may occur. For example: Transaction 1: locks row A. Transaction 2: locks row B. Transaction 1: attempts to lock row B and blocks. Transaction 2: attempts to lock row A and deadlocks. The way to resolve this is to use ORDER BY to
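
A hedged JDBC sketch of the fix the excerpt is heading towards: if every transaction locks the rows in the same deterministic order, the circular wait described above cannot form. MY_TABLE comes from the question; the ID column and the connection handling are assumptions.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OrderedLocking {
        // Every caller locks rows in ascending ID order, so two concurrent
        // transactions can never each be waiting for the other's "next" row.
        static void lockAllRows(Connection conn) throws Exception {
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT * FROM MY_TABLE ORDER BY ID FOR UPDATE")) {
                while (rs.next()) {
                    // ... work with the locked rows ...
                }
                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }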

SQL Server Latches and their indication of performance issues

无人久伴 submitted on 2019-12-31 09:45:09
Question: I am trying to understand a potential performance issue with our database (SQL Server 2008), and in particular one performance counter, SQLServer:Latches\Total Latch Wait Time (ms). We are seeing a slowdown in DB response times, and the only correlating spike I can match it with is a spike in Total Latch Wait Time and Latch Waits/sec. I am not seeing any particular bottleneck in disk I/O, CPU usage, or memory. The common explanation of a SQL Server latch is that it is a
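
One way to see which latch classes are actually accumulating that wait time is to sample the sys.dm_os_latch_stats DMV; a rough JDBC sketch follows (the server name, credentials, and driver choice are placeholders, not taken from the question).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LatchStats {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://myserver;databaseName=master;user=monitor;password=changeme";
            String sql = "SELECT TOP 10 latch_class, waiting_requests_count, wait_time_ms"
                       + " FROM sys.dm_os_latch_stats ORDER BY wait_time_ms DESC";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // Highest-wait latch classes first, e.g. BUFFER.
                    System.out.printf("%-40s waits=%d wait_ms=%d%n",
                            rs.getString(1), rs.getLong(2), rs.getLong(3));
                }
            }
        }
    }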

How do interned strings behave between different threads and class loaders?

大憨熊 submitted on 2019-12-31 05:19:09
Question: Java's documentation says that in the example below the condition will be true: String a = new String("ABC"); String b = new String("ABC"); if (a.intern() == b.intern()) { .... } I wanted to know whether that is still true when a and b are defined in different Threads, or even different ClassLoaders. This question arose when I needed the ability to synchronize a block that loads a certain configuration based on an entity's name, so I wanted to do something like:
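
A small sketch of why the answer is yes as far as threads are concerned: the intern pool is maintained by the JVM as a whole, and java.lang.String is defined by the bootstrap class loader, so intern() hands back the same canonical instance no matter which thread (or application class loader) created the original string. Whether that canonical instance is a safe thing to lock on is a separate question, since unrelated code can intern and lock the very same literal.

    public class InternDemo {
        public static void main(String[] args) throws InterruptedException {
            String a = new String("ABC");
            Thread t = new Thread(() -> {
                String b = new String("ABC");                     // created on another thread
                System.out.println(b.intern() == "ABC".intern()); // true: same pooled instance
            });
            t.start();
            t.join();
            System.out.println(a.intern() == "ABC".intern());     // true here as well
        }
    }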

When a lock holds a non-final object, can the object's reference still be changed by another thread?

倾然丶 夕夏残阳落幕 submitted on 2019-12-31 02:33:53
Question: When an object needs to be synchronized on, the IDE complains if the field is not declared final (because its reference is not guaranteed to stay the same): private static Object myTable; .... synchronized(myTable){ // IDE complains! // access myTable here... } We all know the IDE complains to prevent another thread from entering the guarded block when the thread holding the lock changes the non-final object's reference. But could a synchronized object's reference also be changed by another thread B while thread A holds the
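
A minimal sketch of the pattern the IDE warning nudges towards: a dedicated final lock field. If the lock field stays non-final and some thread reassigns it, two threads can end up synchronizing on different objects and both run the "guarded" code at once. The class and method names below are illustrative, not from the question.

    public class TableHolder {
        // final: the lock reference can never be swapped, so every thread
        // reaching the synchronized block competes for the same monitor.
        private static final Object TABLE_LOCK = new Object();
        private static Object myTable;   // the mutable data the lock protects

        static void updateTable(Object newContents) {
            synchronized (TABLE_LOCK) {
                myTable = newContents;   // the data reference changes, the lock object does not
            }
        }
    }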

Why are lock hints needed on an atomic statement?

北城余情 submitted on 2019-12-31 01:53:06
Question: What is the benefit of applying the lock hints to the statement below? Similarly, what issue would we see if we didn't include these hints? I.e., do they prevent a race condition, improve performance, or do something else? Asking because perhaps they're included to prevent some issue I've not considered rather than the race condition I'd assumed. NB: This is an overflow from a question asked here: SQL Threadsafe UPDATE TOP 1 for FIFO Queue. The statement in question: WITH nextRecordToProcess AS (
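
A hedged reconstruction of the FIFO-dequeue pattern the linked question describes, with hypothetical table and column names. The point of the hints: UPDLOCK makes the SELECT inside the CTE take an update lock at read time, so two workers cannot both decide the same row is "next"; READPAST lets the second worker skip a locked row instead of blocking; ROWLOCK keeps the locking at row granularity.

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class FifoDequeue {
        static final String DEQUEUE_SQL =
              "WITH nextRecordToProcess AS ("
            + "  SELECT TOP (1) Id, StatusId"
            + "  FROM dbo.WorkQueue WITH (UPDLOCK, READPAST, ROWLOCK)"
            + "  WHERE StatusId = 0"
            + "  ORDER BY Id"
            + ") "
            + "UPDATE nextRecordToProcess SET StatusId = 1";

        // Returns true if this worker claimed a row, false if the queue was
        // empty or every candidate row was already locked by another worker.
        static boolean claimNext(Connection conn) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(DEQUEUE_SQL)) {
                return ps.executeUpdate() == 1;
            }
        }
    }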

Are basic arithmetic operations in C# atomic?

随声附和 submitted on 2019-12-31 00:43:14
Question: Are the basic arithmetic operations thread-safe? For example, if there is a ++ operation on a global variable that will be modified from different threads, is it necessary to put a lock around it? For example: void MyThread() // can have many running instances { aGlobal++; } or should it be: void MyThread() { lock( lockerObj ) { aGlobal++; } } Answer 1: The spec sums it up very well. Section 5.5, "Atomicity of variable references": Reads and writes of the following data types are atomic: bool, char, byte
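
The spec quote is about C#, but the underlying trap is language-agnostic, so here is the same experiment sketched in Java for illustration (in C# the equivalent fixes are lock(...) or Interlocked.Increment): reads and writes of an int are individually atomic, yet counter++ is a read-modify-write, so unguarded concurrent increments get lost.

    import java.util.concurrent.atomic.AtomicInteger;

    public class CounterDemo {
        static int plainCounter = 0;                              // racy
        static final AtomicInteger safeCounter = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    plainCounter++;                    // read-modify-write: increments can be lost
                    safeCounter.incrementAndGet();     // atomic read-modify-write
                }
            };
            Thread t1 = new Thread(work), t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join();  t2.join();
            // plainCounter usually ends up below 200000; safeCounter is exactly 200000.
            System.out.println(plainCounter + " vs " + safeCounter.get());
        }
    }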

HttpApplicationState - why does a race condition exist if it is thread-safe?

醉酒当歌 submitted on 2019-12-30 11:20:53
Question: I just read an article that describes how HttpApplicationState has AcquireRead() / AcquireWrite() functions to manage concurrent access. It goes on to explain that in some conditions, however, we need to use an explicit Lock() and Unlock() on the Application object to avoid a race condition. I am unable to understand why a race condition should exist for Application state if concurrent access is implicitly handled by the object. Could someone please explain this to me? Why would I ever need
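
The usual illustration of this, rewritten here as a Java analogue since the original is ASP.NET (the C# fix is wrapping the compound operation in Application.Lock() ... UnLock()): each individual get or put on a synchronized map is thread-safe, but a read-increment-write sequence built from them is not, so two requests can still lose an update. The map and key names are illustrative.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class HitCounter {
        // Each call on this map is internally synchronized, just as each
        // individual read/write of HttpApplicationState is.
        static final Map<String, Integer> app =
                Collections.synchronizedMap(new HashMap<>());

        static void recordHit() {
            // Check-then-act race: two threads can read the same old value
            // and both write back old + 1, losing one hit.
            Integer hits = app.getOrDefault("hits", 0);
            app.put("hits", hits + 1);
        }

        static void recordHitSafely() {
            // The compound operation itself must be made atomic,
            // analogous to Application.Lock() ... UnLock().
            synchronized (app) {
                Integer hits = app.getOrDefault("hits", 0);
                app.put("hits", hits + 1);
            }
        }
    }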

Can address space be recycled for multiple calls to MapViewOfFileEx without chance of failure?

廉价感情. submitted on 2019-12-30 10:26:41
Question: Consider a complex, memory-hungry, multithreaded application running within a 32-bit address space on Windows XP. Certain operations require n large buffers of fixed size, where only one buffer needs to be accessed at a time. The application uses a pattern where some address space the size of one buffer is reserved early and is used to contain the currently needed buffer. This follows the sequence: (initial run) VirtualAlloc -> VirtualFree -> MapViewOfFileEx (buffer changes) UnMapViewOfFile -

spin_lock on non-preemptive Linux kernels

孤人 submitted on 2019-12-30 10:10:22
Question: I read that on a system with 1 CPU and a non-preemptive Linux kernel (2.6.x), a spin_lock call is equivalent to an empty call and is thus implemented that way. I can't understand that: shouldn't it be equivalent to sleeping on a mutex? Even on non-preemptive kernels, interrupt handlers may still be executed, for example, or I might call a function that would put the original thread to sleep. So it's not true that an empty spin_lock call is "safe", as it would be if it were implemented as a mutex. Is there