Python: Lock a file

心在旅途 2020-12-14 13:37

I have a Python app running on Linux. It is called every minute from cron. It checks a directory for files, and if it finds one it processes it - this can take several minutes.

5 Answers
  • 2020-12-14 14:17

    Don't use cron for this. Linux has inotify, which can notify applications when a filesystem event occurs. There is a Python binding for inotify called pyinotify.

    Thus, you don't need to lock the file -- you just need to react to IN_CLOSE_WRITE events (i.e. when a file opened for writing was closed). (You also won't need to spawn a new process every minute.)

    An alternative to using pyinotify is incron, which lets you write an incrontab (very much in the same style as a crontab) to interact with the inotify system.
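    As a sketch, an incrontab entry for this use case might look like the following (the directory and script paths are hypothetical):

```text
# <watched path>  <event mask>     <command>
/var/data/incoming IN_CLOSE_WRITE /usr/local/bin/process.py $@/$#
```

    Here `$@` expands to the watched directory and `$#` to the name of the file that triggered the event, so the processing script receives the full path of each newly closed file.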

  • 2020-12-14 14:17

    What about manually creating an old-fashioned .lock file next to the file you want to lock?

    Just check whether it's there: if not, create it; if it is, exit early. After finishing, delete it.
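    The check-then-create step must be atomic, or two cron instances can race between the check and the create. A minimal sketch using `os.O_CREAT | os.O_EXCL`, which makes the kernel do both in one call (the lock-file name is hypothetical):

```python
import os

LOCKFILE = "app.lock"  # hypothetical lock-file path

def try_lock():
    """Atomically create the lock file; return True if we acquired the lock."""
    try:
        fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another instance holds the lock
    os.write(fd, str(os.getpid()).encode())  # record the owner PID for debugging
    os.close(fd)
    return True

def unlock():
    os.remove(LOCKFILE)
```

    The main drawback of this scheme is that a crash before `unlock()` leaves a stale lock file behind, blocking all later runs until someone removes it by hand.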

  • 2020-12-14 14:20

    I think fcntl.lockf is what you are looking for.

  • 2020-12-14 14:25

    After fumbling with many schemes, this works in my case. I have a script that may be executed multiple times simultaneously. I need these instances to wait their turn to read/write to some files. The lockfile does not need to be deleted, so you avoid blocking all access if one script fails before deleting it.

    import fcntl
    
    def acquireLock():
        ''' acquire exclusive lock file access '''
        locked_file_descriptor = open('lockfile.LOCK', 'w+')
        fcntl.lockf(locked_file_descriptor, fcntl.LOCK_EX)  # blocks until the lock is free
        return locked_file_descriptor
    
    def releaseLock(locked_file_descriptor):
        ''' release exclusive lock file access '''
        locked_file_descriptor.close()  # closing the file releases the lock
    
    lock_fd = acquireLock()
    
    # ... do stuff with exclusive access to your file(s)
    
    releaseLock(lock_fd)
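    The same acquire/release pair can be wrapped in a context manager so the lock is released even if the processing code raises; a sketch (the lock-file path is hypothetical):

```python
import fcntl
from contextlib import contextmanager

@contextmanager
def file_lock(path="lockfile.LOCK"):
    """Hold an exclusive advisory lock on `path` for the duration of the block."""
    fd = open(path, "w+")
    fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks until the lock is available
    try:
        yield fd
    finally:
        fd.close()  # closing the file releases the lock

with file_lock() as fd:
    pass  # ... work with exclusive access to your file(s)
```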
    
  • 2020-12-14 14:25

    You're using the LOCK_NB flag which means that the call is non-blocking and will just return immediately on failure. That is presumably happening in the second process. The reason why it is still able to read the file is that portalocker ultimately uses flock(2) locks, and, as mentioned in the flock(2) man page:

    flock(2) places advisory locks only; given suitable permissions on a file, a process is free to ignore the use of flock(2) and perform I/O on the file.

    To fix it you could use the fcntl.flock function directly (portalocker is just a thin wrapper around it on Linux). Note that in Python a failed non-blocking attempt does not return an error code: fcntl.flock raises OSError (a BlockingIOError), so catch that exception to see whether the lock succeeded.
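    A small sketch of that failure mode (the lock-file name is hypothetical). Because flock locks belong to the open file description, two separate open() calls conflict even within a single process, which lets us demonstrate it without spawning a second one:

```python
import fcntl

first = open("demo.lock", "w")
fcntl.flock(first, fcntl.LOCK_EX)  # take the exclusive lock

second = open("demo.lock", "w")   # a second open file description
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    got_lock = True
except BlockingIOError:  # raised instead of returning an error code
    got_lock = False

print(got_lock)  # False: the non-blocking attempt failed
```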
