I have celery beat and celery (four workers) to do some processing steps in bulk. One of those tasks is roughly along the lines of, "for each X that hasn't had a Y created…"
I took a crack at writing a decorator to use Postgres advisory locking similar to what erydo alluded to in his comment.
It's not very pretty, but it seems to work correctly. This is with SQLAlchemy 0.9.7 under Python 2.7.
from functools import wraps

from sqlalchemy import select, func

from my_db_module import Session  # SQLAlchemy ORM scoped_session


def pg_locked(key):
    def decorator(f):
        @wraps(f)
        def wrapped(*args, **kw):
            session = Session()
            acquired = False  # so the finally block is safe if execute() raises
            try:
                # Non-blocking attempt; returns True only if no other
                # database session currently holds the lock for this key.
                acquired, = session.execute(
                    select([func.pg_try_advisory_lock(key)])).fetchone()
                if acquired:
                    return f(*args, **kw)
            finally:
                if acquired:
                    session.execute(select([func.pg_advisory_unlock(key)]))
        return wrapped
    return decorator
@app.task
@pg_locked(0xdeadbeef)
def singleton_task():
    # only 1x this task can run at a time; if the lock is held
    # elsewhere, the task returns None without doing any work
    pass
(I'd welcome any comments on ways to improve this!)
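One variant I've been toying with (just a sketch, untested, reusing the same Session and app objects from above; advisory_lock is only an illustrative name): since a session-level advisory lock belongs to the database session that took it, the acquire and release have to happen on the same session, and a context manager keeps that pairing in one place while letting the task see whether it actually got the lock:

import contextlib

from sqlalchemy import select, func

from my_db_module import Session  # same scoped_session as above


@contextlib.contextmanager
def advisory_lock(session, key):  # illustrative helper, not a library API
    # Non-blocking attempt to take the lock; yields True or False.
    acquired, = session.execute(
        select([func.pg_try_advisory_lock(key)])).fetchone()
    try:
        yield acquired
    finally:
        # Release on the same session that acquired the lock.
        if acquired:
            session.execute(select([func.pg_advisory_unlock(key)]))


@app.task
def singleton_task():
    with advisory_lock(Session(), 0xdeadbeef) as acquired:
        if acquired:
            pass  # at most one worker reaches this block at a time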