Is it good practice to use mkdir as file-based locking on Linux?

Posted by 早过忘川 on 2019-12-30 09:01:20

Question


I wanted to quickly implement some sort of locking in a Perl program on Linux that could be shared between different processes.

So I used mkdir as an atomic operation: it succeeds (returns true) if the directory doesn't exist yet and fails (returns false) if it already does. I remove the directory right after the critical section.

Now, it was pointed out to me that this is not good practice in general (independently of the language). I think it's quite OK, but I would like to ask your opinion.

Edit: to show an example, my code looked something like this:

    until (mkdir "lock_dir") {
        sleep 1;   # wait some time, then retry
    }
    # ... critical section ...
    rmdir "lock_dir";

Answer 1:


IMHO this is a very bad practice. What if the Perl script which created the lock directory somehow got killed during the critical section? Another Perl script waiting for the lock directory to be removed would wait forever, because the directory will never be removed by the script that originally created it. For safe locking, use flock() on a lock file (see perldoc -f flock).
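A minimal sketch of flock()-based locking, assuming a lock-file path of /tmp/myapp.lock (the name and location are arbitrary):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # Open (or create) the lock file; its contents don't matter.
    open(my $lock_fh, '>>', '/tmp/myapp.lock')
        or die "Cannot open lock file: $!";

    # Block until we hold the exclusive lock.
    flock($lock_fh, LOCK_EX)
        or die "Cannot acquire lock: $!";

    # ... critical section ...

    # Closing the handle releases the lock; the kernel also releases
    # it automatically if the process dies while holding it.
    close($lock_fh);

Because the lock lives in the kernel rather than on disk, a crash inside the critical section cannot leave a stale lock behind.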




Answer 2:


This is fine until an unexpected failure (e.g. program crash, power failure) happens while the directory exists.

After that, the program can never run again, because the lock is held forever (assuming the directory is on a persistent filesystem).

Normally I'd use flock with LOCK_EX instead.

Open a file for reading and writing, creating it if it doesn't exist. Then take the exclusive lock; if that fails (when using LOCK_NB), some other process holds the lock.

After you've got the lock, you need to keep the file open.

The advantage of this approach is that if the process dies unexpectedly (for example, it crashes, is killed, or the machine fails), the lock is automatically released.
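The steps above can be sketched like this (the lock-file path is just an example):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # Open for reading+writing, creating the file if needed.
    open(my $fh, '+>>', '/tmp/myapp.lock')
        or die "Cannot open lock file: $!";

    # Try to take the exclusive lock without blocking.
    if (flock($fh, LOCK_EX | LOCK_NB)) {
        # We hold the lock; keep $fh open for the duration.
        # ... critical section ...
        close($fh);   # releases the lock
    }
    else {
        # Some other process holds the lock right now.
        warn "Another process holds the lock\n";
    }

Drop LOCK_NB if you would rather block and wait than fail immediately.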



Source: https://stackoverflow.com/questions/7208447/is-it-good-practice-to-use-mkdir-as-file-based-locking-on-linux
