does eigen have self transpose multiply optimization like H.transpose()*H

Submitted by 末鹿安然 on 2019-12-12 04:28:09

Question


I have read the Eigen tutorial at https://eigen.tuxfamily.org/dox-devel/group__TutorialMatrixArithmetic.html

It says: "Note: for BLAS users worried about performance, expressions such as c.noalias() -= 2 * a.adjoint() * b; are fully optimized and trigger a single gemm-like function call."

But what about a computation like H.transpose() * H? Since its result is a symmetric matrix, it should only need half the time of a general A * B product, yet in my test H.transpose() * H takes the same time as H.transpose() * B. Does Eigen have a special optimization for this situation? OpenCV, for instance, has a function for it.

I know that exploiting symmetry can break vectorization; I just want to know whether Eigen has a solution that provides both the symmetric optimization and vectorization.


Answer 1:


You are right, you need to tell Eigen that the result is symmetric this way:

MatrixXd H = MatrixXd::Random(m,n);
MatrixXd Z = MatrixXd::Zero(n,n);
Z.selfadjointView<Lower>().rankUpdate(H.transpose());

The last line computes Z += H^T * H within the lower triangular part; the upper part is left unchanged. If you want a full matrix, copy the lower part into the upper one:

 Z.triangularView<Upper>() = Z.transpose();

This rankUpdate routine is fully vectorized and comparable to the BLAS equivalent. For small matrices, it is better to perform the full product.

See also the respective doc.



Source: https://stackoverflow.com/questions/39606224/does-eigen-have-self-transpose-multiply-optimization-like-h-transposeh
