Python multiprocessing: why are large chunksizes slower?

Submitted by 淺唱寂寞╮ on 2019-12-18 12:34:23

Question


I've been profiling some code using Python's multiprocessing module (the 'job' function just squares the number).

import multiprocessing
import time

def job(x):
    # the 'job' function just squares the number
    return x * x

if __name__ == "__main__":
    data = range(100000000)
    n = 4
    time1 = time.time()
    processes = multiprocessing.Pool(processes=n)
    results_list = processes.map(func=job, iterable=data, chunksize=10000)
    processes.close()
    time2 = time.time()
    print(time2 - time1)
    print(results_list[0:10])

One thing I found odd is that the optimal chunksize appears to be around 10k elements - this took 16 seconds on my computer. If I increase the chunksize to 100k or 200k, then it slows to 20 seconds.

Could this difference be due to pickling taking longer for longer lists? A chunksize of 100 elements takes 62 seconds, which I'm assuming is due to the extra overhead of passing many more chunks back and forth between the processes.
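One way to sanity-check the pickling hypothesis is to time the serialization step on its own. The sketch below is my own illustration, not part of the original question: it pickles one representative chunk of each size and scales the result by the number of chunks needed to cover all 100,000,000 items.

import pickle
import time

TOTAL = 100000000  # same total number of items as in the question

for chunksize in (100, 10000, 100000):
    chunk = list(range(chunksize))  # stand-in for one chunk of 'data'
    start = time.time()
    payload = pickle.dumps(chunk)
    per_chunk = time.time() - start
    n_chunks = TOTAL // chunksize
    print(chunksize, len(payload), "bytes per chunk,",
          round(per_chunk * n_chunks, 2), "s estimated total pickling time")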


Answer 1:


About the optimal chunksize:

  1. Having lots of small chunks lets the 4 workers distribute the load more evenly, so smaller chunks are desirable.
  2. On the other hand, context switches between processes add overhead every time a new chunk has to be dispatched, so fewer context switches, and therefore fewer (larger) chunks, are desirable.

Since the two rules pull in opposite directions, the best chunksize lies somewhere in the middle, much like the equilibrium point on a supply-demand chart; a small benchmark sweep, as sketched below, is an easy way to find it for a given workload.
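To locate that middle point for a particular machine, one option is simply to measure it. The following is a minimal benchmark sketch under my own assumptions (a squaring job and a dataset shrunk to 10,000,000 items so each run finishes quickly), not code from the original answer:

import multiprocessing
import time

def job(x):
    return x * x

if __name__ == "__main__":
    data = range(10000000)  # smaller than the question's 100M so each run is quick
    with multiprocessing.Pool(processes=4) as pool:
        for chunksize in (100, 1000, 10000, 100000, 1000000):
            start = time.time()
            pool.map(job, data, chunksize=chunksize)
            print(chunksize, round(time.time() - start, 2), "s")

Whichever chunksize prints the lowest time is the practical sweet spot for that machine and workload.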



Source: https://stackoverflow.com/questions/40799172/python-multiprocessing-why-are-large-chunksizes-slower
