Tried to write a process-based timeout (sync) on the cheap, like this:
from concurrent.futures import ProcessPoolExecutor

def call_with_timeout(func, *args, timeout=3):
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, *args)
        return future.result(timeout=timeout)
The timeout itself is behaving as it should: future.result(timeout=timeout) gives up and raises TimeoutError once the timeout elapses. The delay comes from shutting down the pool afterwards. Leaving the with block calls pool.shutdown(wait=True), which blocks until every submitted call has finished executing, including the one still running in the worker process.
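A minimal sketch (using time.sleep as a stand-in for the slow call) shows where the time goes: result() gives up after about 3 seconds, but shutdown(wait=True) does not return until the 10-second sleep finishes:

import time
from concurrent.futures import ProcessPoolExecutor, TimeoutError

if __name__ == "__main__":
    start = time.monotonic()
    pool = ProcessPoolExecutor(max_workers=1)
    future = pool.submit(time.sleep, 10)   # worker is busy for 10 seconds
    try:
        future.result(timeout=3)           # raises TimeoutError after ~3 s
    except TimeoutError:
        print(f"timed out after {time.monotonic() - start:.1f}s")
    pool.shutdown(wait=True)               # what the with block does implicitly
    print(f"shutdown returned after {time.monotonic() - start:.1f}s")  # ~10 s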
You can make the shutdown call return immediately by passing shutdown(wait=False), but the Python program as a whole still won't exit until all pending futures have finished executing:
def call_with_timeout(func, *args, timeout=3):
    pool = ProcessPoolExecutor(max_workers=1)
    try:
        future = pool.submit(func, *args)
        result = future.result(timeout=timeout)
    finally:
        pool.shutdown(wait=False)   # don't block waiting for the worker
    return result
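As a rough usage check (again with time.sleep standing in for the real work, and assuming the call_with_timeout above is defined in the same module), the caller gets its TimeoutError promptly, but the interpreter still lingers until the worker finishes:

import time
from concurrent.futures import TimeoutError

if __name__ == "__main__":
    start = time.monotonic()
    try:
        call_with_timeout(time.sleep, 10, timeout=3)
    except TimeoutError:
        print(f"gave up after {time.monotonic() - start:.1f}s")  # ~3 s
    # The script still takes ~10 s to exit: the worker process is busy
    # finishing the sleep, and Python waits for it before terminating.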
The Executor API offers no way to cancel a call that's already executing. future.cancel() can only cancel calls that haven't started yet. If you want abrupt abort functionality, you should probably use something other than concurrent.futures.ProcessPoolExecutor.
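If killing the worker outright is acceptable, one workaround is to drop down to multiprocessing and terminate the child process yourself. The following is only a rough sketch of that idea; run_with_hard_timeout and _call_and_send are placeholder names, and it assumes func, its arguments, and its return value are picklable:

import multiprocessing as mp

def _call_and_send(conn, func, args):
    conn.send(func(*args))      # ship the result back to the parent

def run_with_hard_timeout(func, *args, timeout=3):
    recv_end, send_end = mp.Pipe(duplex=False)
    proc = mp.Process(target=_call_and_send, args=(send_end, func, args))
    proc.start()
    send_end.close()            # parent only reads from the pipe
    proc.join(timeout)
    if proc.is_alive():         # still running after the deadline: kill it
        proc.terminate()
        proc.join()
        raise TimeoutError(f"{func!r} did not finish within {timeout}s")
    return recv_end.recv()      # EOFError here means the child died without sending

Bear in mind that terminate() stops the child abruptly, so any pipes, queues, or locks it shares with other processes may be left in a corrupted state (the multiprocessing docs warn about this), which is why it should be a last resort.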