why is multiprocess Pool slower than a for loop?


Question


from multiprocessing import Pool
import time

def op1(data):
    return [data[elem] + 1 for elem in range(len(data))]

data = [[elem for elem in range(20)] for elem in range(500000)]

# plain for loop
start_time = time.time()
re = []
for data_ in data:
    re.append(op1(data_))

print('--- %s seconds ---' % (time.time() - start_time))

# multiprocessing pool with 4 workers
start_time = time.time()
pool = Pool(processes=4)
data = pool.map(op1, data)

print('--- %s seconds ---' % (time.time() - start_time))

I get a much slower run time with the pool than with the for loop. But isn't the pool supposed to be using 4 processes to do the computation in parallel?


Answer 1:


Short answer: yes, the operations will usually be carried out on (a subset of) the available cores, but the communication overhead is large. In your example the workload is too small compared to that overhead.

When you construct a pool, a number of worker processes are created. If you then ask it to map over some input, the following happens:

  1. the data will be split: every worker gets an approximately fair share;
  2. the data will be communicated to the workers;
  3. every worker will process their share of work;
  4. each result is communicated back to the main process; and
  5. the main process groups the results together.

Now splitting, communicating and joining the data are all carried out by the main process and cannot be parallelized. Since the per-item operation itself is fast (O(n) for input of size n), the overhead has the same time complexity as the computation.

So, complexity-wise, even if you had millions of cores it would not make much difference, because communicating the list is probably already more expensive than computing the results.

That's why you should parallelize computationally expensive tasks, not trivial ones: the amount of processing should be large compared to the amount of communication.

In your example the work is trivial: you add 1 to every element. Serializing, however, is less trivial: you have to encode every list you send to a worker.
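As a rough counter-example (a minimal sketch, not the poster's code; heavy_op and its parameters are invented purely for illustration): if each item needs real CPU work while the payload stays small, the same four-process pool does win.

import time
from multiprocessing import Pool

def heavy_op(n):
    # hypothetical CPU-bound task: sum of squares below n
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == '__main__':
    data = [200000] * 200   # tiny payload per item, lots of computation

    start = time.time()
    serial = [heavy_op(n) for n in data]
    print('serial: %.2f s' % (time.time() - start))

    start = time.time()
    with Pool(processes=4) as pool:
        parallel = pool.map(heavy_op, data)
    print('pool:   %.2f s' % (time.time() - start))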




Answer 2:


There are a couple of potential trouble spots with your code, but primarily it's too simple.

The multiprocessing module works by creating separate processes and communicating among them. For each process created, you have to pay the operating system's process startup cost as well as the Python interpreter startup cost. Those costs can be high or low, but they're non-zero in any case.

Once you've paid those startup costs, you pool.map the worker function across all the processes, which basically adds 1 to a few numbers. That is not a significant load, as your tests prove.
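To get a feel for that fixed cost in isolation, you can time nothing but the creation and teardown of a pool (a small sketch; the exact number depends on your OS, Python version and start method):

import time
from multiprocessing import Pool

if __name__ == '__main__':
    start = time.time()
    with Pool(processes=4) as pool:
        pass  # start four workers, run no tasks, then tear them down
    print('pool startup/teardown: %.3f s' % (time.time() - start))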

What's worse, you're using .map(), which is implicitly ordered (compare with .imap_unordered()), so there's synchronization going on, leaving even less freedom for the various CPU cores to give you speed.
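If you stay with a pool anyway, imap_unordered returns results in whatever order the workers finish, and a larger chunksize batches many small items into each task, reducing both the synchronization and the per-item communication. A minimal sketch, restating op1 and data from the question:

from multiprocessing import Pool

def op1(data):
    return [data[elem] + 1 for elem in range(len(data))]

if __name__ == '__main__':
    data = [[elem for elem in range(20)] for elem in range(500000)]
    with Pool(processes=4) as pool:
        # input is chopped into chunks of 1000 items per task;
        # results are yielded as soon as each one is ready
        results = list(pool.imap_unordered(op1, data, chunksize=1000))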

If there's a problem here, it's a "design of experiment" problem: you haven't created a sufficiently difficult problem for multiprocessing to be able to help you.




Answer 3:


As others have noted, the overhead you pay to facilitate multiprocessing is more than the time saved by parallelizing across multiple cores. In other words, your function op1() does not require enough CPU work to see a performance gain from parallelization.

In the multiprocessing.Pool class, the majority of this overhead is spent serializing and deserializing data as it is shuttled between the parent process (which creates the Pool) and the child "worker" processes.
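You can estimate that serialization cost directly by pickling the question's input once (a rough sketch; Pool actually pickles the data chunk by chunk, but the total volume is the same):

import pickle
import time

data = [[elem for elem in range(20)] for elem in range(500000)]

start = time.time()
blob = pickle.dumps(data)
restored = pickle.loads(blob)
print('pickle round trip: %.2f s, payload: %.1f MB'
      % (time.time() - start, len(blob) / 1e6))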

This blog post explores, in greater detail, how expensive pickling (serializing) can be when using multiprocessing.Pool.



Source: https://stackoverflow.com/questions/45256953/why-is-multiprocess-pool-slower-than-a-for-loop
