Performance of integer and bitwise operations on GPU


Question


Though GPUs are primarily intended for floating-point data types, I'd be interested in how fast a GPU can process bitwise operations. These are among the fastest operations on a CPU, but does a GPU emulate bitwise operations, or are they computed fully in hardware? I'm planning to use them inside shader programs written in GLSL. I'd also suppose that if bitwise operations run at full performance, integer data types should as well, but I need confirmation on that.

To be more precise, the targeted versions are OpenGL 3.2 and GLSL 1.5. The hardware that should run this is any Radeon HD graphics card and GeForce series 8 and newer. If there are major changes in newer versions of OpenGL and GLSL related to the processing speed of bitwise operations or integers, I'd be glad if you pointed them out.
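For illustration, this is roughly the kind of thing I have in mind, e.g. unpacking fields from a packed integer inside a fragment shader. It's only a sketch; the input name and the packing scheme are placeholders I made up, not real code:

#version 150

flat in uint packedColor;   // RGBA packed into one 32-bit word (placeholder input)
out vec4 fragColor;

void main()
{
    // Pull each 8-bit channel out with shifts and masks.
    uint r = (packedColor >> 24u) & 0xFFu;
    uint g = (packedColor >> 16u) & 0xFFu;
    uint b = (packedColor >>  8u) & 0xFFu;
    uint a =  packedColor         & 0xFFu;

    // Convert the integer channels back to normalized floats for output.
    fragColor = vec4(r, g, b, a) / 255.0;
}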


Answer 1:


This question was partially answered in Integer calculations on GPU.

In short, modern GPUs have equivalent INT and FP performance for 32-bit data, so your logical and bitwise operations will run at the same speed as floating-point math.

From a programming perspective, you will lose performance if you are dealing with SCALAR integer data. GPUs like working with PARALLEL and PACKED operations, as in:

for(int i=0; i<LEN_VEC4; i++)
    VEC4[i] = VEC4[i] * VEC4[i]; // (x,y,z,w) * (x,y,z,w)

If you're doing something like...

// Mixing individual components of one vector: effectively scalar work.
for(int i=0; i<LEN_VEC4; i++)
    VEC4[i].w = (VEC4[i].x & 0xF0F0F0F0) | (VEC4[i].z ^ 0x0F0F0F0F) ^ VEC4[i].w;

...doing many different operations on elements of the same vector, you will run into performance problems.
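To put that in GLSL terms, here is a rough sketch of the contrast (the variable names are only illustrative, not from the snippets above): the same bitwise expression applied to all four components of a uvec4 stays packed, while picking apart individual components of one vector, as in the second loop, collapses to scalar work:

#version 150

flat in uvec4 data;   // illustrative input, not from the original snippets
out vec4 fragColor;

void main()
{
    // Packed: one bitwise expression covers all four components at once.
    uvec4 packedMix = (data & uvec4(0xF0F0F0F0u)) ^ (data >> 4u);

    // Per-component: mixing individual lanes of one vector is scalar work.
    uint scalarMix = (data.x & 0xF0F0F0F0u) | ((data.z ^ 0x0F0F0F0Fu) ^ data.w);

    // Use both results so neither computation is optimized away.
    fragColor = vec4(packedMix & uvec4(0xFFu)) / 255.0
              + vec4(float(scalarMix & 0xFFu) / 255.0);
}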



Source: https://stackoverflow.com/questions/8683720/performance-of-integer-and-bitwise-operations-on-gpu
