locking

Application area of lock-striping

拥有回忆 submitted on 2019-12-23 02:41:13
Question: The JDK's ConcurrentHashMap uses a lock-striping technique. It is a nice idea for minimizing locking overhead. Do any other libraries or tools take advantage of it? For example, do database engines use it? If the technique is not very useful in other areas, what are its limitations?

Answer 1: Lock striping is useful when there is a way to break a high-contention lock into multiple locks without compromising data integrity. Whether this is possible takes some thought
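The idea generalizes beyond Java. Below is a minimal sketch in Python (class and method names are invented for illustration): each key hashes to one of N stripes, each with its own lock and its own sub-map, so threads working on keys in different stripes never block each other.

```python
import threading

class StripedCounterMap:
    """Toy lock-striped counter map: each key hashes to one of
    num_stripes (lock, dict) pairs, so threads touching keys in
    different stripes never contend with each other."""

    def __init__(self, num_stripes=16):
        self._locks = [threading.Lock() for _ in range(num_stripes)]
        self._maps = [dict() for _ in range(num_stripes)]

    def _stripe(self, key):
        # Stripe selection by key hash, analogous to segment
        # selection in the pre-Java-8 ConcurrentHashMap.
        return hash(key) % len(self._locks)

    def increment(self, key):
        i = self._stripe(key)
        with self._locks[i]:
            self._maps[i][key] = self._maps[i].get(key, 0) + 1

    def get(self, key):
        i = self._stripe(key)
        with self._locks[i]:
            return self._maps[i].get(key, 0)
```

Whether a given structure can be striped this way is exactly the data-integrity question raised in the answer: any operation that needs a consistent global view (say, a total across all stripes) must acquire every stripe lock, in a fixed order to avoid deadlock, which erodes the benefit.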

Table without lock won't query unless nolock hint included

妖精的绣舞 submitted on 2019-12-23 02:38:48
Question: Microsoft SQL Server 2008 R2. I have a table that currently cannot be queried but does not appear to have a lock against it.

Doesn't return:

    SELECT * FROM myTable

Does return:

    SELECT * FROM myTable WITH (NOLOCK)

Inserts into the table also fail. When I run sp_lock, I don't find any locks on myTable. When I run the "Resource Locking Statistics by Object" report, I don't see any locks for myTable either. What other possibilities are there that would keep a table from being acted upon

Lock a file between C and PHP

空扰寡人 submitted on 2019-12-23 01:57:08
Question: Although the title mentions a file, it doesn't have to be a file; any locking mechanism would do. Here is the situation: I have a daemon process written in C and a web page in PHP. I'd like a way of mutual locking so that, in certain situations, the C daemon locks a file and PHP detects this and tells the client side that the system is busy. Is there an easy way of doing this? Thanks.

Answer 1: flock does it properly. In your PHP script, use a non-blocking lock:

    $fd = fopen(
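Since the PHP snippet above is cut off, here is the same non-blocking pattern sketched in Python (fcntl.flock wraps the same flock(2) system call that PHP's flock uses on Unix; the function names and path handling are illustrative):

```python
import fcntl
import os

def try_lock(path):
    """Try to take an exclusive, non-blocking flock on path.
    Returns an open file descriptor holding the lock, or None if
    another process (e.g. the C daemon) already holds it."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # LOCK_NB makes the call fail immediately instead of blocking,
        # which lets the web page report "busy" right away.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)
        return None

def unlock(fd):
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
```

On the C side the matching call is flock(fd, LOCK_EX) (blocking) or flock(fd, LOCK_EX | LOCK_NB). Note that flock locks are advisory: they only work if both sides use them.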

Synchronization issue using Python's multiprocessing module

我怕爱的太早我们不能终老 submitted on 2019-12-23 00:52:27
Question: When I run the following code from the Python multiprocessing module's documentation page:

    from multiprocessing import Process, Lock

    def f(l, i):
        l.acquire()
        print 'hello world', i
        l.release()

    if __name__ == '__main__':
        lock = Lock()
        for num in range(10):
            Process(target=f, args=(lock, num)).start()

I sometimes get unordered output such as:

    hello world 0
    hello world 1
    hello world 2
    hello world 4
    hello world 3
    hello world 6
    hello world 5
    hello world 7
    hello world 8
    hello world 9

Note that 4 is printed before
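The lock only guarantees mutual exclusion while printing, not the order in which the ten processes are scheduled. If strictly ordered output is required, each process has to wait for its turn; a sketch in Python 3 syntax, assuming a shared turn counter (the function name is invented):

```python
from multiprocessing import Condition, Process, Value

def ordered_hello(turn, cond, i):
    with cond:
        # Wait for our turn instead of merely excluding other writers.
        while turn.value != i:
            cond.wait()
        print('hello world', i)
        turn.value += 1
        cond.notify_all()

if __name__ == '__main__':
    turn = Value('i', 0)   # whose turn it is to print
    cond = Condition()
    procs = [Process(target=ordered_hello, args=(turn, cond, n))
             for n in range(10)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

With only a Lock, each line prints exactly once and lines never interleave, but the order is whatever the OS scheduler chose; the condition variable is what adds the ordering.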

How do I lock certain SQL rows while running a process on them?

怎甘沉沦 submitted on 2019-12-22 17:56:05
Question: My workplace has a financial application, written in VB.NET with SQL, that several users can work on at the same time. At some point, one user might decide to Post the batch of entries that they (and possibly other people) are currently working on. Obviously, I no longer want any other users to add, edit, or delete entries in that batch once the Post process has been initiated. I have already seen that I can lock all the data by opening a SQL transaction the moment the Post process starts,

Non-intrusively unlock file on Windows

删除回忆录丶 submitted on 2019-12-22 17:55:28
Question: Is there a way to unlock a file on Windows with a Python script? The file is exclusively locked by another process. I need a solution that does not kill or interrupt the locking process. I have already looked at portalocker, a portable locking implementation, but it needs a file handle to unlock, which I cannot get because the file is already locked by the locking process. If there is no way, could someone point me to the Windows API documentation that describes the problem further?

Answer 1: Anything you do

File locking - read then write whilst locked

痴心易碎 submitted on 2019-12-22 17:51:50
Question: I need to be able to open a file to read it while holding a lock that denies other instances of the same application write access until I have written the amended file back to disk. The file is in a shared location on a network, and the app instances can be on any machine on the network. I have tried using a FileStream with FileAccess.ReadWrite and FileShare.Read, with a StreamReader to read and then a StreamWriter (on the same FileStream) to write, but this corrupts the file. Other
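Setting the .NET specifics aside, the usual shape of an in-place read-modify-write on a single handle is read, seek back to the start, write, then truncate; skipping the truncate leaves stale bytes from the old content whenever the new content is shorter, which looks exactly like corruption. A Python sketch of the pattern (the function name is invented):

```python
def rewrite_in_place(path, transform):
    """Read and rewrite a file through one handle.
    'r+' opens for update without truncating, so the read sees the
    old content; the explicit truncate() afterwards removes stale
    trailing bytes whenever the new content is shorter."""
    with open(path, 'r+') as f:
        data = f.read()
        new_data = transform(data)
        f.seek(0)          # rewind: after read() the position is at EOF
        f.write(new_data)
        f.truncate()       # cut off leftovers from the old, longer content
    return new_data
```

One likely cause of the corruption in the question is exactly this: after the StreamReader consumes the stream, the position sits at the end, so the StreamWriter appends rather than overwrites, and the file is never shrunk.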

FileChannel & RandomAccessFile don't seem to work

痞子三分冷 submitted on 2019-12-22 12:27:27
Question: To put it simply: a Swing app that uses sqlitejdbc as a backend. Currently there is no problem launching multiple instances that work with the same database file, and there should be. The file is locked (it can't be deleted while the app is running), so the check should be trivial. It turns out it isn't:

    File f = new File("/path/to/file/db.sqlite");
    FileChannel channel = new RandomAccessFile(f, "rw").getChannel();
    System.out.println(channel.isOpen());
    System.out.println(channel.tryLock());

results in true
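One likely explanation is that SQLite takes advisory byte-range locks, and only while a transaction is in flight, so a whole-file tryLock from another process can still succeed. A Python sketch of POSIX byte-range locking (Unix-only, helper names invented) showing that locks on overlapping ranges conflict across processes while disjoint ranges do not:

```python
import fcntl
import os
from multiprocessing import Process, Value

def lock_range(fd, start, length):
    """Try a non-blocking exclusive record lock on [start, start+length)."""
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, length, start)
        return True
    except (BlockingIOError, PermissionError):
        # EAGAIN/EACCES: another process holds an overlapping lock.
        return False

def try_from_other_process(path, start, length, result):
    # POSIX record locks are per-process, so a real conflict can only
    # be demonstrated from a second process.
    fd = os.open(path, os.O_RDWR)
    result.value = 1 if lock_range(fd, start, length) else 0
    os.close(fd)
```

So channel.tryLock() over the whole file can report success even while SQLite has the database open: its locks are advisory, cover specific byte ranges, and are held only for the duration of a transaction, which makes this an unreliable "is another instance running" check.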

What is the lowest byte usage value for a monitor lock?

我的未来我决定 submitted on 2019-12-22 12:23:52
Question: To use intrinsic locking in Java you do:

    Object o = new Object();
    ...
    synchronized (o) { ... }

So one monitor already requires one Object, i.e. 8 bytes (16 bytes on 64-bit, or 12 bytes on 64-bit with compressed oops). Now assume you want to use lots of these monitors, e.g. for an array that can be synchronized over certain regions and therefore has better (entry-based) concurrency than Collections.synchronizedList. What, then, is the most efficient way to implement this? Could I somehow use 2 nested