Can I avoid Python loop overhead on dynamic programming with numpy?


You won't get any speed boost from NumPy if you use Python loops in your algorithm. You need to parallelize your problem.

In image processing, parallelization means applying the same function to all pixels, i.e. using kernels. In NumPy, instead of doing:

for x in range(xsize):
    for y in range(ysize):
        img1[y, x] = img2[y, x] + img3[y, x]

you do:

img1 = img2 + img3 # add 2 images pixelwise

so that the loop happens in C. The fact that you have a list of neighbors of unknown length for each pixel makes your problem difficult to parallelize in this way. You should either rework your problem (could you be a bit more specific about your algorithm?) or use another language, like Cython.

Edit:

You won't get any benefit from NumPy without changing your algorithm. NumPy lets you perform array and linear algebra operations efficiently, but it cannot remove the looping overhead when you apply arbitrary per-element operations from Python.

To optimize this, you may consider:

  • switching to another language like Cython (which specializes in building Python extensions) to get rid of the looping cost

  • optimizing your algorithm: if you can get the same result using only array and linear algebra operations (this depends on the neighborsThatNeedCalculation function), you can use NumPy, but you will need to work out a new architecture.

  • using parallelization techniques like MapReduce. With Python you can use a pool of workers (available in the multiprocessing module); see the sketch after this list. You will get larger gains if you also switch to another language, since Python has other bottlenecks.
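
As a minimal sketch of the worker-pool suggestion, assuming the per-row work is independent (process_row below is a hypothetical stand-in, not the actual update function from the question):

import numpy as np
from multiprocessing import Pool

def process_row(row):
    # Hypothetical stand-in for the real per-row work; the actual
    # update would come from the original algorithm.
    return row + 1

if __name__ == "__main__":
    a = np.ones((400, 400))
    with Pool() as pool:
        # Map the row-wise work across worker processes and
        # reassemble the results into a single array.
        result = np.vstack(pool.map(process_row, a))

Keep in mind that process startup and pickling have their own cost, so this only pays off when the per-row work is substantial; for something as cheap as adding 1, a plain array operation will beat it easily.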

If you want something easy to set up and integrate, and you just need C-like performance, I strongly suggest Cython if you can't rework your algorithm.

For the first part, you can use numpy.vectorize, but you should only do so if there's no way to implement the functionality of updateDistance with array operations. Here's an example:

import numpy as np
updateDistance = np.vectorize(lambda x: x + 1)  # my updateDistance just increments

In reality, if this is the operation you are trying to do, just do a + 1. So if we take an array of ones and apply updateDistance:

>>> a = np.ones((3,3))
>>> updateDistance(a)
array([[ 2.,  2.,  2.],
       [ 2.,  2.,  2.],
       [ 2.,  2.,  2.]])

As for the second part, I do not think I understand the details well enough to suggest a better alternative. It sounds like you need to look at the nearest neighbors repeatedly, so I suspect you can improve things in the if-else, at least.
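
If the neighborhood does turn out to be fixed (say, the four adjacent cells), here is a rough sketch of what the array-operation approach could look like; it assumes a plain stencil and that each cell's update does not depend on values already changed in the same pass, which may not match the actual neighborsThatNeedCalculation logic:

import numpy as np

A = np.random.rand(100, 100)

# Sum of the four orthogonal neighbors for every interior cell,
# computed with shifted slices instead of a per-pixel Python loop.
neighbor_sum = np.zeros_like(A)
neighbor_sum[1:-1, 1:-1] = (A[:-2, 1:-1] + A[2:, 1:-1] +
                            A[1:-1, :-2] + A[1:-1, 2:])

If the dynamic-programming order matters (each cell depends on already-updated neighbors), a whole-array pass like this will not reproduce it, which is why Cython keeps coming up as the fallback.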


Update: Timings for the first part.

Note: these timings were done on my machine without attempting to normalize the environment.

Loop times are generated with:

python -mtimeit 'import numpy as np' 'n = 100' 'a = np.ones((n, n))' 'b = np.zeros((n, n))' 'for x in range(n): ' '    for y in range(n):' '        b[x,y] = a[x,y] + 1'

The np.vectorize times are generated with:

python -mtimeit 'import numpy as np' 'n = 100' 'a = np.ones((n, n))' 'updateDistance = np.vectorize(lambda x: x + 1)' 'b = updateDistance(a)'

In both cases, n = 100 leads to a 100 x 100 array. Replace 100 as needed.

Array size    Loop version    np.vectorize version    np.vectorize speed up
100 x 100     20.2 msec       2.6 msec                7.77x
200 x 200     81.8 msec       10.4 msec               7.87x
400 x 400     325 msec        42.6 msec               7.63x

Finally, to compare the np.vectorize example with simply using array operations, you can do:

python -mtimeit 'import numpy as np' 'n = 100' 'a = np.ones((n, n))' 'a += 1'

On my machine, this generated the following.

Array size    Array operation version    Speed up over np.vectorize version
100 x 100     23.6 usec                  110.2x
200 x 200     79.7 usec                  130.5x
400 x 400     286 usec                   149.0x

In summary, there is an advantage to using np.vectorize instead of loops, but there is a much bigger incentive to implement the functionality of updateDistance using array operations, if possible.

You should consider using a C extension or Cython. If you stay with Python, one major improvement can be achieved by replacing:

for xn, yn in neighbors(x, y):
    A[x, y] += 1 + A[xn, yn]

with:

n = neighbors(x, y)
A[x, y] += len(n) + sum(A[n])

neighbors should return indexes, not subscripts, so that A can be addressed in one vectorized step.
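
Here is one possible shape for that, assuming "indexes" means flat indexes into the array; the neighbors helper below is hypothetical and only handles the four orthogonal neighbors:

import numpy as np

def neighbors(x, y, shape):
    # Hypothetical helper: flat indexes of the in-bounds orthogonal
    # neighbors of (x, y), suitable for indexing A.flat directly.
    idx = []
    for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        xn, yn = x + dx, y + dy
        if 0 <= xn < shape[0] and 0 <= yn < shape[1]:
            idx.append(xn * shape[1] + yn)
    return np.array(idx)

A = np.zeros((5, 5))
x, y = 2, 2
n = neighbors(x, y, A.shape)

# One gather and one vectorized sum replace the per-neighbor Python loop.
A[x, y] += len(n) + A.flat[n].sum()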
