Speeding up numpy.dot


Question


I've got a numpy script that spends about 50% of its runtime in the following code:

s = numpy.dot(v1, v1)

where

v1 = v[1:]

and v is a 4000-element 1D ndarray of float64 stored in contiguous memory (v.strides is (8,)).
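For reference, here is a minimal timing sketch of the hot spot (random data as a hypothetical stand-in for the real vector):

import numpy
import timeit

v = numpy.random.rand(4000)   # stand-in data; same dtype and layout as described above
v1 = v[1:]
t = timeit.timeit(lambda: numpy.dot(v1, v1), number=100000)
print(t / 100000 * 1e6, "us per call")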

Any suggestions for speeding this up?

Edit: This is on Intel hardware. Here is the output of numpy.show_config() for my installation:

atlas_threads_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas-3.9.16/lib']
    language = f77
    include_dirs = ['/usr/local/atlas-3.9.16/include']

blas_opt_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas-3.9.16/lib']
    define_macros = [('ATLAS_INFO', '"\\"3.9.16\\""')]
    language = c
    include_dirs = ['/usr/local/atlas-3.9.16/include']

atlas_blas_threads_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas-3.9.16/lib']
    language = c
    include_dirs = ['/usr/local/atlas-3.9.16/include']

lapack_opt_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas-3.9.16/lib']
    define_macros = [('ATLAS_INFO', '"\\"3.9.16\\""')]
    language = f77
    include_dirs = ['/usr/local/atlas-3.9.16/include']

lapack_mkl_info:
  NOT AVAILABLE

blas_mkl_info:
  NOT AVAILABLE

mkl_info:
  NOT AVAILABLE

Answer 1:


Your arrays are not very big, so ATLAS probably isn't doing much. What are your timings for the following Fortran program? Assuming ATLAS really contributes little, this should give you a sense of how fast dot() could be if there were no Python overhead. With gfortran -O3 I get speeds of 5 +/- 0.5 us.

    program test

    real*8 :: x(4000), start, finish, s
    integer :: j
    integer, parameter :: jmax = 100000

    x(:) = 4.65                              ! fill the vector with a constant
    s = 0.
    call cpu_time(start)
    do j = 1, jmax
        s = s + dot_product(x, x)            ! accumulate so the loop is not optimized away
    enddo
    call cpu_time(finish)
    print *, (finish-start)/jmax * 1.e6, s   ! average microseconds per dot product

    end program test
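For comparison, here is the same measurement from Python (a rough sketch; these numbers will include the NumPy call overhead):

import numpy as np
import timeit

x = np.full(4000, 4.65)                 # same vector as the Fortran program
t = timeit.timeit(lambda: np.dot(x, x), number=100000)
print(t / 100000 * 1e6, "us per dot()")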



Answer 2:


Perhaps the culprit is copying of the arrays passed to dot.

As Sven said, the dot product relies on BLAS operations. These operations require arrays stored in contiguous C order. If both arrays passed to dot are C-contiguous (i.e., their C_CONTIGUOUS flag is True), you ought to see better performance.

Of course, if the two arrays passed to dot are indeed 1D, e.g. shape (8,), you should see both the C_CONTIGUOUS and F_CONTIGUOUS flags set to True; but if they are 2D, e.g. shape (1, 8), you can see mixed order:

>>> w = NP.random.randint(0, 10, 100).reshape(100, 1)
>>> w.flags
   C_CONTIGUOUS : True
   F_CONTIGUOUS : False
   OWNDATA : False
   WRITEABLE : True
   ALIGNED : True
   UPDATEIFCOPY : False
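In the asker's case, v1 = v[1:] is a 1D slice and stays contiguous, so no copy should be needed; a quick check:

>>> v = NP.zeros(4000)
>>> v[1:].flags['C_CONTIGUOUS'], v[1:].flags['F_CONTIGUOUS']
(True, True)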


An alternative: use the _gemm routines from BLAS, which are exposed through the module scipy.linalg.fblas. (Because fblas is used, the two arrays A and B are treated as being in Fortran order.)

from scipy.linalg import fblas as FB
X = FB.dgemm(alpha=1., a=A, b=B, trans_b=True)   # A, B: your two input arrays
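Note that in recent SciPy versions the fblas module has been merged into scipy.linalg.blas; a sketch of the equivalent call for the 1D case in the question, using the low-level ddot wrapper (assumes float64 inputs):

import numpy as np
from scipy.linalg.blas import ddot   # low-level BLAS wrapper in modern SciPy

v = np.random.rand(4000)
v1 = v[1:]
s = ddot(v1, v1)                     # double-precision dot product via BLAS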



Answer 3:


The only thing I can think of to accelerate this is to make sure your NumPy installation is compiled against an optimized BLAS library (such as ATLAS). numpy.dot() is one of only a few NumPy functions that make use of BLAS.
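To see the difference BLAS makes, here is a rough sketch contrasting the BLAS-backed dot with a plain ufunc reduction that does not go through BLAS:

import numpy as np
import timeit

x = np.random.rand(4000)
print(timeit.timeit(lambda: np.dot(x, x), number=100000))    # BLAS-backed dot product
print(timeit.timeit(lambda: np.sum(x * x), number=100000))   # elementwise multiply + sum, no BLAS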




Answer 4:


numpy.dot will use multithreading if it is compiled correctly; check with top that it actually does. I know of cases where people could not get multithreading to work in NumPy with ATLAS. It is also worth trying a NumPy build compiled against Intel's MKL libraries, whose BLAS routines are supposed to be faster than ATLAS on Intel hardware. You could give Enthought's Python distribution a try: it contains all of this and is free for anyone with an .edu email address.
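A rough way to check the threading: keep a large matrix product busy and watch the CPU usage in top (a hypothetical probe, not a benchmark; the 1D dot in the question uses level-1 BLAS, which is typically not threaded, so a matrix product is used here):

import numpy as np

a = np.random.rand(2000, 2000)
for _ in range(50):        # watch 'top' in another terminal; >100% CPU means multiple threads
    np.dot(a, a)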



Source: https://stackoverflow.com/questions/5990577/speeding-up-numpy-dot
