SSE2 double multiplication slower than with standard multiplication


Question


I'm wondering why the following code with SSE2 instructions performs the multiplication more slowly than the standard C++ implementation. Here is the code:

        m_win = (double*)_aligned_malloc(size*sizeof(double), 16);
        __m128d* pData = (__m128d*)input().data;
        __m128d* pWin = (__m128d*)m_win;
        __m128d* pOut = (__m128d*)m_output.data;
        __m128d tmp;
        int i=0;
        for(; i<m_size/2;i++)
            pOut[i] = _mm_mul_pd(pData[i], pWin[i]);

The memory for m_output.data and input().data has been allocated with _aligned_malloc.

However, the time to execute this code for a 2^25-element array is identical to the time for the following code (350ms):

for(int i=0;i<m_size;i++)
    m_output.data[i] = input().data[i] * m_win[i];

How is that possible? It should theoretically take only 50% of the time, right? Or is the overhead for the memory transfer from SIMD registers to the m_output.data array so expensive?

If I replace the line from the first snippet

pOut[i] = _mm_mul_pd(pData[i], pWin[i]);

by

tmp = _mm_mul_pd(pData[i], pWin[i]);

where tmp is declared as __m128d, then the code executes blazingly fast, in less than the resolution of my timer function. Is that because everything is just kept in registers and not written to memory?

And even more surprisingly, if I compile in debug mode, the SSE code takes only 93ms while the standard multiplication takes 309ms.

  • DEBUG: 93ms (SSE2) / 309ms (standard multiplication)
  • RELEASE: 350ms (SSE2) / 350ms (standard multiplication)

What's going on here???

I'm using MSVC2008 with QtCreator 2.2.1 in release mode. Here are my compiler switches for RELEASE:

cl -c -nologo -Zm200 -Zc:wchar_t- -O2 -MD -GR -EHsc -W3 -w34100 -w34189

and these are for DEBUG:

cl -c -nologo -Zm200 -Zc:wchar_t- -Zi -MDd -GR -EHsc -W3 -w34100 -w34189

EDIT: Regarding the RELEASE vs. DEBUG issue: I just want to note that I profiled the code, and the SSE code is in fact slower in release mode! That somewhat confirms the hypothesis that VS2008 can't handle intrinsics properly with the optimizer. Intel VTune gives me 289ms for the SSE loop in DEBUG and 504ms in RELEASE mode. Wow... just wow...


Answer 1:


First of all, VS 2008 is a bad choice for intrinsics, as it tends to add many more register moves than necessary and in general does not optimize very well (for instance, it has issues with loop induction variable analysis when SSE instructions are present).

So, my wild guess is that for the scalar code the compiler generates mulsd instructions which the CPU can trivially reorder and execute in parallel (there are no dependencies between iterations), while the intrinsics result in lots of register moves and more complex SSE code -- it might even blow the trace cache on modern CPUs. VS2008 is notorious for doing all its calculations in registers, and I guess there will be some hazards that the CPU cannot skip (something like xor reg, mov mem->reg, xor, mov mem->reg, mul, mov reg->mem, which forms a dependency chain, while the scalar code might be mov mem->reg, mul with a memory operand, mov reg->mem). You should definitely look at the generated assembly or try VS 2010, which has much better support for intrinsics.
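As one way to inspect the generated assembly with the MSVC command-line compiler (yourfile.cpp is just a placeholder here), you can add the /FAs switch, which writes a .asm listing with your source lines interleaved with the generated instructions:

cl -c -nologo -O2 -FAs yourfile.cpp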

Finally, and most importantly: your code is not compute bound at all, so no amount of SSE will make it significantly faster. On each iteration, you are reading four double values and writing two, which means FLOPs are not your problem. In that case, you're at the mercy of the cache/memory subsystem, and that probably explains the variance you see. The debug multiplication shouldn't be faster than release; if you see it being faster, then you should do more runs and check what else is going on (be careful if your CPU supports a turbo mode, as that adds another 20% of variation). A context switch which empties the cache might be enough in this case.
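To put rough numbers on that (a back-of-the-envelope estimate, assuming m_size = 2^25 and 8-byte doubles): each of the three arrays is 2^25 * 8 B = 256 MiB, so one pass of the loop streams about 512 MiB of reads plus 256 MiB of writes, roughly 768 MiB in total. At 350ms that is on the order of 2-3 GB/s of sustained memory traffic, which is a plausible figure for a single-threaded streaming loop and supports the point that memory bandwidth, not the multiplies, is the limit.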

So, overall, the test you made is pretty much meaningless and just shows that for memory-bound cases there is no difference between using SSE or not. You should use SSE where the code is actually compute-dense and parallel, and even then I would spend a lot of time with a profiler to nail down the exact location to optimize. A simple dot product is not suitable for seeing any performance improvements with SSE.




Answer 2:


Several points:

  • as has already been pointed out, MSVC generates pretty bad code for SSE
  • your code is almost certainly memory bandwidth limited, since you are performing only one operation in between loads and stores (see the sketch after this list)
  • most modern x86 CPUs have two floating point ALUs, so there may be little to be gained from using SSE for double precision floating point math, even if you're not bandwidth-limited
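
Regarding the second point above, here is a minimal sketch of the same loop written with explicit load/multiply/store intrinsics (purely illustrative, and it does not change the conclusion: there is still only one multiply between two loads and a store per iteration, so it remains bandwidth-bound):

    #include <emmintrin.h>  // SSE2 intrinsics

    // Element-wise product of two arrays: out[i] = a[i] * b[i].
    // Assumes n is even and all pointers are 16-byte aligned
    // (as they are when allocated with _aligned_malloc(..., 16)).
    void multiply_sse2(const double* a, const double* b, double* out, int n)
    {
        for (int i = 0; i < n; i += 2)
        {
            __m128d va = _mm_load_pd(a + i);            // load two doubles
            __m128d vb = _mm_load_pd(b + i);            // load two doubles
            _mm_store_pd(out + i, _mm_mul_pd(va, vb));  // multiply and store two doubles
        }
    }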


Source: https://stackoverflow.com/questions/6565040/sse2-double-multiplication-slower-than-with-standard-multiplication
