Question
I am currently trying to speed up my large sparse (scipy) matrix multiplications. I have successfully linked my numpy installation against OpenBLAS, and by extension scipy as well. I have run these tests with success.
When I use numpy.dot(X, Y) I can clearly see performance boosts, and multiple cores are used simultaneously. However, when I use scipy's dot functionality, no such performance boost is visible and only one core is used. For example:
import numpy
import scipy.sparse

x = scipy.sparse.csr_matrix(numpy.random.random((1000, 1000)))
x.dot(x.T)
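For reference, a minimal timing sketch (my addition, using the standard timeit module and the same 1000x1000 size as above) that contrasts the BLAS-backed dense product with the sparse one:

import timeit

import numpy
import scipy.sparse

dense = numpy.random.random((1000, 1000))
sparse = scipy.sparse.csr_matrix(dense)

# Dense product: dispatched to the linked BLAS (e.g. OpenBLAS),
# which may use multiple cores.
t_dense = timeit.timeit(lambda: dense.dot(dense.T), number=10)

# Sparse product: handled by scipy's own single-threaded routines.
t_sparse = timeit.timeit(lambda: sparse.dot(sparse.T), number=10)

print(f"dense:  {t_dense:.3f} s")
print(f"sparse: {t_sparse:.3f} s")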
Does anyone know how I can make BLAS also work with scipy's dot functionality?
Answer 1:
BLAS is only used for dense floating-point matrices. Matrix multiplication of a scipy.sparse.csr_matrix is done using pure C++ functions that make no calls to external BLAS libraries. For example, matrix-matrix multiplication is implemented in csr_matmat_pass_1 and csr_matmat_pass_2.
Optimised BLAS libraries are highly tuned to make efficient use of CPU caches by decomposing the dense input matrices into smaller block matrices in order to achieve better locality-of-reference. My understanding is that this strategy can't be easily applied to sparse matrices, where the non-zero elements may be arbitrarily distributed within the matrix.
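A consequence of this, sketched below as a possible workaround rather than anything from the original answer: if the matrices are only moderately sparse and fit in memory as dense arrays, you can convert them with toarray() so the product goes through the multithreaded BLAS instead of scipy's sparse path. Whether this wins depends entirely on the density.

import numpy
import scipy.sparse

# A genuinely sparse matrix: ~1% non-zero entries.
x = scipy.sparse.random(1000, 1000, density=0.01, format="csr")

# Sparse-times-sparse: scipy's single-threaded C++ path.
sparse_result = x.dot(x.T)

# Dense workaround: toarray() materialises the operands so numpy.dot
# can dispatch to the linked BLAS (e.g. OpenBLAS). Only sensible when
# the matrices fit in memory and are not extremely sparse.
dense_result = numpy.dot(x.toarray(), x.T.toarray())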
Source: https://stackoverflow.com/questions/25098653/is-it-possible-to-use-blas-to-speed-up-sparse-matrix-multiplication