concurrent.futures

Python concurrent.futures: ProcessPoolExecutor fails to work

余生长醉 submitted on 2020-01-16 08:39:09
Question: I'm trying to use the ProcessPoolExecutor method but it fails. Here is an example (computing the greatest common divisor of two numbers) of a failed use. I don't understand what the mistake is.

```python
def gcd(pair):
    a, b = pair
    low = min(a, b)
    for i in range(low, 0, -1):
        if a % i == 0 and b % i == 0:
            return i

numbers = [(1963309, 2265973), (2030677, 3814172), (1551645, 2229620), (2039045, 2020802)]
start = time()
pool = ProcessPoolExecutor(max_workers=2)
results = list(pool.map(gcd, numbers))
end = time()
```
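A common cause of this kind of failure is that `ProcessPoolExecutor` needs the worker function to be importable and, on platforms that spawn worker processes, the pool must be created under an `if __name__ == "__main__":` guard. A minimal sketch of the guarded version, assuming that is the issue (the input pairs are shortened here for illustration):

```python
from concurrent.futures import ProcessPoolExecutor
from time import time

def gcd(pair):
    # brute-force greatest common divisor of a pair of integers
    a, b = pair
    low = min(a, b)
    for i in range(low, 0, -1):
        if a % i == 0 and b % i == 0:
            return i

if __name__ == "__main__":
    numbers = [(12, 18), (100, 75)]  # illustrative, smaller than the original data
    start = time()
    with ProcessPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(gcd, numbers))
    print(results)  # → [6, 25]
    print("took %.3f s" % (time() - start))
```

The `with` block also ensures the pool's worker processes are shut down cleanly.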

How to use queue with concurrent future ThreadPoolExecutor in python 3?

烂漫一生 submitted on 2020-01-14 10:24:58
Question: I am using the simple threading module to do concurrent jobs. Now I would like to take advantage of the concurrent.futures module. Can someone give me an example of using a queue with the concurrent.futures library? I am getting TypeError: 'Queue' object is not iterable and I don't know how to iterate over queues. Code snippet:

```python
def run(item):
    self.__log.info(str(item))
    return True

# <queue filled here>

with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
    furtureIteams = { executor.submit(run, item): item
```
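Since `queue.Queue` is not iterable, the usual pattern is to drain it with `get()` and submit each item individually. A hedged sketch of that pattern (the queue contents and the `run` body are illustrative stand-ins, not the poster's code):

```python
import queue
import concurrent.futures

def run(item):
    # stand-in for the real work on one queue item
    return item * 2

work_queue = queue.Queue()
for i in range(5):
    work_queue.put(i)

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = {}
    while not work_queue.empty():           # drain the queue item by item
        item = work_queue.get()
        futures[executor.submit(run, item)] = item
    # collect results as they finish; sort because completion order varies
    results = sorted(f.result() for f in concurrent.futures.as_completed(futures))

print(results)  # → [0, 2, 4, 6, 8]
```

Note that `empty()` is only safe here because nothing else is putting items on the queue concurrently.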

How to print results of Python ThreadPoolExecutor.map immediately?

六月ゝ 毕业季﹏ submitted on 2020-01-13 19:28:26
Question: I am running a function for several sets of iterables, returning a list of all results as soon as all processes are finished.

```python
def fct(variable1, variable2):
    # do an operation that does not necessarily take the same amount of
    # time for different input variables and yields result1 and result2
    return result1, result2

variables1 = [1, 2, 3, 4]
variables2 = [7, 8, 9, 0]

with ThreadPoolExecutor(max_workers=8) as executor:
    future = executor.map(fct, variables1, variables2)
    print '[%s]' % ', '.join(map
```
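`Executor.map` yields results in submission order, so a slow early task delays printing of later ones. To print each result the moment it is ready, the usual alternative is `submit()` plus `as_completed()`. A sketch under that assumption (Python 3; the `fct` body here is an illustrative stand-in):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fct(v1, v2):
    # stand-in for work of varying duration
    return v1 + v2, v1 * v2

variables1 = [1, 2, 3, 4]
variables2 = [7, 8, 9, 0]

results = []
with ThreadPoolExecutor(max_workers=8) as executor:
    futures = [executor.submit(fct, a, b) for a, b in zip(variables1, variables2)]
    for future in as_completed(futures):
        r1, r2 = future.result()
        print(r1, r2)           # printed in completion order, not submission order
        results.append((r1, r2))
```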

How can I cancel a hanging asynchronous task in tornado, with a timeout?

前提是你 submitted on 2020-01-02 12:24:08
Question: My setup is a Python Tornado server which asynchronously processes tasks with a ThreadPoolExecutor. In some conditions a task might turn into an infinite loop. With the with_timeout decorator I have managed to catch the timeout exception and return an error result to the client. The problem is that the task is still running in the background. How is it possible to stop the task from running in the ThreadPoolExecutor? Or is it possible to cancel the Future? Here is the code that reproduces
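`Future.cancel()` only succeeds for tasks that have not started yet; a thread that is already running cannot be force-killed. The usual workaround is cooperative cancellation: the task periodically checks a flag that the timeout handler sets. A minimal sketch of that pattern outside Tornado (the loop body and bound are illustrative):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def task(stop_event):
    # stands in for work that would otherwise loop forever
    iterations = 0
    while not stop_event.is_set():
        iterations += 1
        if iterations >= 1_000_000:  # safety bound, only for this sketch
            break
    return iterations

stop = threading.Event()
with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(task, stop)
    stop.set()                 # on timeout, signal the task to stop itself
    result = future.result()   # returns promptly once the flag is seen
```

The cost is that the long-running work must be written to check the flag at reasonable intervals; code that blocks inside a single call cannot be interrupted this way.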

Copy flask request/app context to another process

三世轮回 submitted on 2020-01-01 06:26:58
Question:

tl;dr: How can I serialise a Flask app or request context, or a subset of that context (i.e. whatever can be successfully serialised), so that I can access that context from another process rather than a thread?

Long version: I have some functions that require access to the Flask request context, or the app context, that I want to run in the background. Flask has a built-in @copy_current_request_context decorator to wrap a function in a copy of the request context, so you can run it in a
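A generic sketch of the usual approach, not Flask-specific (all names here are illustrative): copy the picklable parts of a context into a plain dict, drop what cannot be serialised, and hand that snapshot to the worker process.

```python
import pickle
from concurrent.futures import ProcessPoolExecutor

def snapshot_context(context):
    # keep only the values the pickle module can serialise
    safe = {}
    for key, value in context.items():
        try:
            pickle.dumps(value)
            safe[key] = value
        except Exception:
            pass  # drop unpicklable entries (locks, sockets, callbacks, ...)
    return safe

def worker(ctx):
    # the worker sees only the serialisable snapshot, not the live context
    return ctx.get("user", "anonymous")

if __name__ == "__main__":
    context = {"user": "alice", "callback": lambda x: x}  # lambda won't pickle
    safe = snapshot_context(context)
    with ProcessPoolExecutor(max_workers=1) as pool:
        print(pool.submit(worker, safe).result())
```

The trade-off is that the worker gets a read-only copy: changes it makes are not reflected back in the original context.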

multiprocessing queue full

ぃ、小莉子 submitted on 2020-01-01 04:21:06
Question: I'm using concurrent.futures to implement multiprocessing. I am getting a queue.Full error, which is odd because I am only assigning 10 jobs.

```python
A_list = [np.random.rand(2000, 2000) for i in range(10)]
with ProcessPoolExecutor() as pool:
    pool.map(np.linalg.svd, A_list)
```

Error:

```
Exception in thread Thread-9:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/threading.py", line 921, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python
```
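Errors like this typically come from pushing very large pickled payloads (here, ten 2000×2000 arrays) through the pool's internal IPC queues. One common workaround is to send only a small description of each job and construct the large data inside the worker. A sketch of that idea under those assumptions (pure Python stand-ins instead of NumPy, so the payload point is the structure, not the library):

```python
from concurrent.futures import ProcessPoolExecutor

def build_and_process(size):
    # build the large object inside the worker instead of shipping it
    # through the task queue; only the small `size` argument is pickled
    matrix = [[(i * size + j) % 7 for j in range(size)] for i in range(size)]
    return sum(sum(row) for row in matrix)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(build_and_process, [4, 4, 4]))
    print(results)
```

For data that genuinely must be shared, shared memory or memory-mapped files avoid the queues entirely.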

Individual timeouts for concurrent.futures

丶灬走出姿态 submitted on 2019-12-30 08:04:52
Question: I see two ways to specify timeouts in concurrent.futures: as_completed() and wait(). Both methods handle N running futures with a single shared timeout. I would like to specify an individual timeout for each future.

Use case: a future getting data from a DB has a timeout of 0.5 s; a future getting data from an HTTP server has a timeout of 1.2 s. How do I handle this with concurrent.futures? Or is this library not the right tool?

Conclusion: AFAIK the solution by mdurant is a good work-around. I think I will use a
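One simple route is that each `Future.result()` call takes its own `timeout` argument, so deadlines can differ per task as long as you collect results sequentially. A hedged sketch of that approach (the sleep durations and limits below are illustrative, and the elapsed time spent waiting on earlier futures still counts against later wall-clock deadlines):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow(seconds):
    # stand-in for a DB or HTTP call of known worst-case duration
    time.sleep(seconds)
    return seconds

with ThreadPoolExecutor(max_workers=2) as executor:
    fast_future = executor.submit(slow, 0.05)  # "DB" call
    slow_future = executor.submit(slow, 0.5)   # "HTTP" call

    outcomes = {}
    for name, future, limit in [("db", fast_future, 0.5),
                                ("http", slow_future, 0.1)]:
        try:
            outcomes[name] = future.result(timeout=limit)  # per-future deadline
        except TimeoutError:
            outcomes[name] = "timed out"

print(outcomes)  # → {'db': 0.05, 'http': 'timed out'}
```

Note that a timed-out task keeps running in its thread; the timeout only stops the caller from waiting.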
