I have multiple threads running the same process that need to be able to notify each other that something should not be worked on for the next n seconds. It's not the end of the world if they do, though.
I know this is a little old, but for those interested in a solution with no third-party dependencies, this is a minor wrapper around the built-in functools.lru_cache (I noticed Javier's similar answer after writing this, but figured I'd post it anyway since this doesn't require Django):
import functools
import time


def time_cache(max_age, maxsize=128, typed=False):
    """Least-recently-used cache decorator with time-based cache invalidation.

    Args:
        max_age: Time to live for cached results (in seconds).
        maxsize: Maximum cache size (see `functools.lru_cache`).
        typed: Cache on distinct input types (see `functools.lru_cache`).
    """
    def _decorator(fn):
        @functools.lru_cache(maxsize=maxsize, typed=typed)
        def _new(*args, __time_salt, **kwargs):
            return fn(*args, **kwargs)

        @functools.wraps(fn)
        def _wrapped(*args, **kwargs):
            return _new(*args, **kwargs, __time_salt=int(time.time() / max_age))

        return _wrapped

    return _decorator
And its usage:
@time_cache(10)
def expensive(a: int):
"""An expensive function."""
time.sleep(1 + a)
print("Starting...")
expensive(1)
print("Again...")
expensive(1)
print("Done")
NB: this uses time.time and so comes with all its caveats (e.g. the system clock can be adjusted forwards or backwards). You may want to use time.monotonic instead if available/appropriate.
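For example, a drop-in variant salted with time.monotonic could look like the sketch below (this is not from the original answer, and the name monotonic_time_cache is just illustrative; time.monotonic is available from Python 3.3, and its buckets are anchored to an arbitrary start point rather than the epoch):

def monotonic_time_cache(max_age, maxsize=128, typed=False):
    """Same idea as time_cache, but salted with time.monotonic so system
    clock adjustments cannot prematurely invalidate (or unduly extend)
    cached results."""
    def _decorator(fn):
        @functools.lru_cache(maxsize=maxsize, typed=typed)
        def _new(*args, __time_salt, **kwargs):
            return fn(*args, **kwargs)

        @functools.wraps(fn)
        def _wrapped(*args, **kwargs):
            # time.monotonic never goes backwards, so the salt only ever increases
            return _new(*args, **kwargs, __time_salt=int(time.monotonic() / max_age))

        return _wrapped

    return _decorator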