Any concurrent.futures timeout that actually works?

Submitted by 女生的网名这么多〃 on 2019-12-10 15:58:52

Question


Tried to write a process-based timeout (sync) on the cheap, like this:

from concurrent.futures import ProcessPoolExecutor

def call_with_timeout(func, *args, timeout=3):
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, *args)
        result = future.result(timeout=timeout)
        return result

But it seems the timeout argument passed to future.result doesn't really work as advertised.

>>> t0 = time.time()
... call_with_timeout(time.sleep, 2, timeout=3)
... delta = time.time() - t0
... print('wall time:', delta)
wall time: 2.016767978668213

OK.

>>> t0 = time.time()
... call_with_timeout(time.sleep, 5, timeout=3)
... delta = time.time() - t0
... print('wall time:', delta)
# TimeoutError

Not OK - unblocked after 5 seconds, not 3 seconds.

Related questions show how to do this with thread pools or with signal. How can I time out a process submitted to a pool after n seconds, without using any private API of multiprocessing? A hard kill is fine; there's no need to request a clean shutdown.


Answer 1:


You might want to take a look at pebble.

Its ProcessPool was designed to solve this exact issue: it allows timing out and cancelling running tasks without having to shut down the entire pool.

When a future times out or is cancelled, the worker is actually terminated, effectively stopping the execution of the scheduled function.

Timeout:

import pebble
from concurrent.futures import TimeoutError

pool = pebble.ProcessPool(max_workers=1)
future = pool.schedule(func, args=args, timeout=1)  # terminate the worker after 1 second
try:
    future.result()
except TimeoutError:
    print("Timeout")

Example:

import pebble

def call_with_timeout(func, *args, timeout=3):
    with pebble.ProcessPool(max_workers=1) as pool:
        future = pool.schedule(func, args=args, timeout=timeout)
        return future.result()
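
A quick sanity check, assuming pebble is installed: replay the question's failing case against this wrapper. Because the worker is terminated when the timeout expires, the wall time should come out close to 3 seconds rather than 5.

import time
from concurrent.futures import TimeoutError

if __name__ == '__main__':
    t0 = time.time()
    try:
        call_with_timeout(time.sleep, 5, timeout=3)
    except TimeoutError:
        print('Timeout')
    print('wall time:', round(time.time() - t0, 1))  # expected: roughly 3, not 5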



Answer 2:


The timeout is behaving as it should. future.result(timeout=timeout) stops after the given timeout. Shutting down the pool still waits for all pending futures to finish executing, which causes the unexpected delay.

You can make the shutdown happen in the background by calling shutdown(wait=False), but the overall Python program won't end until all pending futures finish executing anyway:

from concurrent.futures import ProcessPoolExecutor

def call_with_timeout(func, *args, timeout=3):
    pool = ProcessPoolExecutor(max_workers=1)
    try:
        future = pool.submit(func, *args)
        return future.result(timeout=timeout)
    finally:
        pool.shutdown(wait=False)  # don't block here, but the worker keeps running
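
To see what that means in practice, here is a small driver sketch (run as a script, with the usual guard for platforms that spawn workers): the TimeoutError arrives after about 3 seconds, but the interpreter still waits for the sleeping worker before it can exit.

import time
from concurrent.futures import TimeoutError  # not the builtin TimeoutError before Python 3.11

if __name__ == '__main__':
    t0 = time.time()
    try:
        call_with_timeout(time.sleep, 5, timeout=3)
    except TimeoutError:
        print('raised after', round(time.time() - t0, 1), 'seconds')  # roughly 3
    # The script as a whole still runs for roughly 5 seconds, because the
    # worker process keeps sleeping until it finishes on its own.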

The Executor API offers no way to cancel a call that's already executing. future.cancel() can only cancel calls that haven't started yet. If you want abrupt abort functionality, you should probably use something other than concurrent.futures.ProcessPoolExecutor.
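
If a hard kill is acceptable, as the question says, one option outside concurrent.futures is a plain multiprocessing.Process combined with join(timeout) and terminate(), which uses no private API. A minimal sketch under those assumptions (the _worker helper and the queue-based result passing are illustrative choices, not a library API):

import multiprocessing as mp

def _worker(queue, func, args):
    # Run the call in the child and ship the outcome back over a queue.
    try:
        queue.put(('ok', func(*args)))
    except Exception as exc:
        queue.put(('err', exc))

def call_with_timeout(func, *args, timeout=3):
    queue = mp.Queue()
    proc = mp.Process(target=_worker, args=(queue, func, args))
    proc.start()
    proc.join(timeout)             # wait at most `timeout` seconds
    if proc.is_alive():
        proc.terminate()           # hard kill, as allowed by the question
        proc.join()
        raise TimeoutError('call timed out after %s seconds' % timeout)
    status, payload = queue.get()  # child finished: collect its result or error
    if status == 'err':
        raise payload
    return payload

The usual multiprocessing caveats apply: func, its arguments, and its result must be picklable, and on spawn-based platforms the call has to happen under an if __name__ == '__main__' guard.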



Source: https://stackoverflow.com/questions/54011552/any-concurrent-futures-timeout-that-actually-works
