I have multiple threads running the same process that need to be able to notify each other that something should not be worked on for the next n seconds; it's not the end of the world if they do, however.
The OP is using Python 2.7, but if you're using Python 3, ExpiringDict
mentioned in the accepted answer is currently, well, expired. The last commit to the GitHub repo was June 17, 2017, and there is an open issue that it doesn't work with Python 3.5.
As of September 1, 2020, there is a more recently maintained project, cachetools.
pip install cachetools
>>> from cachetools import TTLCache
>>> cache = TTLCache(maxsize=10, ttl=360)
>>> cache['apple'] = 'top dog'
>>> cache['apple']
'top dog'

... after 360 seconds ...

>>> cache['apple']
Traceback (most recent call last):
  ...
KeyError: 'apple'
ttl is the time to live in seconds.
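If you want to cache a function's results rather than manage the dict by hand, cachetools also ships a cached decorator that accepts any of its cache classes. A minimal sketch (expensive_lookup is just a stand-in for whatever slow call you are memoizing):

import threading
import time
from cachetools import cached, TTLCache

# one shared cache; the lock keeps it consistent when several threads call this
@cached(cache=TTLCache(maxsize=128, ttl=360), lock=threading.Lock())
def expensive_lookup(key):
    # stands in for a slow computation or network call
    time.sleep(1)
    return key.upper()

The lock argument is optional, but since the OP is coordinating between threads it is worth passing one.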
I know this is a little old, but for those who are interested in avoiding third-party dependencies, this is a minor wrapper around the built-in functools.lru_cache
(I noticed Javier's similar answer after writing this, but figured I'd post it anyway since this doesn't require Django):
import functools
import time


def time_cache(max_age, maxsize=128, typed=False):
    """Least-recently-used cache decorator with time-based cache invalidation.

    Args:
        max_age: Time to live for cached results (in seconds).
        maxsize: Maximum cache size (see `functools.lru_cache`).
        typed: Cache on distinct input types (see `functools.lru_cache`).
    """
    def _decorator(fn):
        @functools.lru_cache(maxsize=maxsize, typed=typed)
        def _new(*args, __time_salt, **kwargs):
            return fn(*args, **kwargs)

        @functools.wraps(fn)
        def _wrapped(*args, **kwargs):
            # The salt changes every `max_age` seconds, so the lru_cache key
            # changes with it and stale entries stop being returned.
            return _new(*args, **kwargs, __time_salt=int(time.time() / max_age))

        return _wrapped

    return _decorator
And its usage:
@time_cache(10)
def expensive(a: int):
    """An expensive function."""
    time.sleep(1 + a)


print("Starting...")
expensive(1)
print("Again...")
expensive(1)  # returns from the cache within the 10-second window
print("Done")
NB: this uses time.time and comes with all its caveats. You may want to use time.monotonic instead if available/appropriate.
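For what that swap could look like, the only change is the salt computation inside _wrapped, something along these lines (untested sketch; the rest of time_cache stays the same):

@functools.wraps(fn)
def _wrapped(*args, **kwargs):
    # time.monotonic is unaffected by system clock adjustments, so a clock
    # change can neither expire entries early nor keep them alive too long
    return _new(*args, **kwargs, __time_salt=int(time.monotonic() / max_age))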