Question
This is probably simple and I'm just not finding a suitable question. If I do a stand-alone script with multiprocessing.Pool, I know I'm supposed to do:
from multiprocessing import Pool

def foo(x):
    return x**2

if __name__ == '__main__':
    with Pool(n_jobs) as p:
        p.map(foo, list_of_inputs)
But if I want to then make it an importable function, I assume __name__ will no longer be '__main__'. Is it safe to just do:
from multiprocessing import Pool

def __foo(x):
    return x**2

def bar(list_of_inputs, n_jobs):
    with Pool(n_jobs) as p:
        out = p.map(__foo, list_of_inputs)
    return out
###################################################
# in a separate script
from test import bar   # bar is defined in test.py

baz = bar(mylist, 4)
I assume so, since bar will only be called in the calling process and only __foo will be called in the child processes, but I'm just wondering whether this is a safe way to do this in general.
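For context, a minimal sketch of what the calling script might look like, assuming the importable module is named test.py as above (the file name caller.py and the sample inputs are just illustrative). Even though the Pool lives inside bar, the entry point of whichever script is actually executed should still be guarded, because start methods that spawn fresh interpreters (e.g. on Windows) re-import the main module in every worker:

# caller.py -- hypothetical calling script
from test import bar   # bar is defined in test.py

def main():
    mylist = list(range(10))
    baz = bar(mylist, 4)   # Pool is created and torn down inside bar
    print(baz)

if __name__ == '__main__':
    # Without this guard, spawn-based platforms would re-run the
    # top-level call to bar() in every worker process.
    main()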
Answer 1:
I don't see any problem with your code; it works fine for me in Python 3 and in Python 2 (if I don't use the with statement for the Pool). In my opinion, this is a safe way to use multiprocessing, and I don't understand why you experience this kind of infinite hanging. What version of Python are you using?
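For reference, a sketch of what the "no with statement under Python 2" remark looks like in practice: multiprocessing.Pool only became a context manager in Python 3.3, so on Python 2 the pool is closed and joined explicitly. This is only an illustration of that point, not the original poster's code:

from multiprocessing import Pool

def __foo(x):
    return x**2

def bar(list_of_inputs, n_jobs):
    # Python 2 compatible: no "with Pool(...)"; clean up manually instead.
    p = Pool(n_jobs)
    try:
        out = p.map(__foo, list_of_inputs)
    finally:
        p.close()   # stop accepting new tasks
        p.join()    # wait for worker processes to exit
    return out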
Source: https://stackoverflow.com/questions/57140777/importable-multiprocessing-pool-function