Integer calculations on GPU

Question


For my work, integer calculations are particularly relevant, even though that is obviously not what GPUs were designed for. My question is: do modern GPUs support efficient integer operations? I realize this should be easy to figure out for myself, but I keep finding conflicting answers (for example, yes vs. no), so I thought it best to ask.

Also, are there any libraries/techniques for arbitrary precision integers on GPUs?
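To make the second part of the question concrete, one classic building block for multi-word arithmetic on NVIDIA GPUs is carry propagation through inline PTX (add.cc / addc). A minimal sketch, with invented names add128 and demo_kernel, might look like this (it is only an illustration of the kind of technique meant, not a library recommendation):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sketch: one 128-bit addition built from four 32-bit limbs, chaining the
// hardware carry flag with the PTX add.cc / addc instructions. Real
// arbitrary-precision code generalizes this idea to arrays of limbs.
__device__ uint4 add128(uint4 a, uint4 b)
{
    uint4 r;
    asm("add.cc.u32  %0, %4, %8;\n\t"    // lowest limb, sets the carry flag
        "addc.cc.u32 %1, %5, %9;\n\t"    // propagate carry upward
        "addc.cc.u32 %2, %6, %10;\n\t"
        "addc.u32    %3, %7, %11;\n\t"   // highest limb, consumes the carry
        : "=r"(r.x), "=r"(r.y), "=r"(r.z), "=r"(r.w)
        : "r"(a.x), "r"(a.y), "r"(a.z), "r"(a.w),
          "r"(b.x), "r"(b.y), "r"(b.z), "r"(b.w));
    return r;
}

__global__ void demo_kernel(uint4 *out)
{
    // 2^96 + (2^32 - 1), plus 1: the carry must ripple into the second limb.
    uint4 a = make_uint4(0xFFFFFFFFu, 0u, 0u, 1u);
    uint4 b = make_uint4(1u, 0u, 0u, 0u);
    *out = add128(a, b);
}

int main()
{
    uint4 *d_out, h_out;
    cudaMalloc(&d_out, sizeof(uint4));
    demo_kernel<<<1, 1>>>(d_out);
    cudaMemcpy(&h_out, d_out, sizeof(uint4), cudaMemcpyDeviceToHost);
    printf("%08x %08x %08x %08x\n", h_out.w, h_out.z, h_out.y, h_out.x);
    cudaFree(d_out);
    return 0;
}
```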


Answer 1:


First, you need to consider the hardware you are using: GPU performance differs widely from one vendor to another.
Second, it also depends on the operations in question: for example, additions may be faster than multiplications.

In my case, I only use NVIDIA devices. For this hardware, the official documentation states equivalent performance for 32-bit integers and 32-bit single-precision floats on the newer architecture (Fermi). The previous architecture (Tesla) offered equivalent performance for 32-bit integers and floats only for additions and logical operations.

But once again, this may not be true depending on the device and instructions you use.
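As a concrete illustration of the operation classes being compared, here is a minimal CUDA sketch (the kernel and variable names such as int_ops_kernel are made up for this example) that exercises 32-bit integer adds, multiplies and logical operations on the device:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sketch: exercise 32-bit integer add, multiply and logical operations,
// i.e. the instruction classes whose throughput the documentation compares.
__global__ void int_ops_kernel(const unsigned int *a, const unsigned int *b,
                               unsigned int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        unsigned int sum  = a[i] + b[i];        // 32-bit integer add
        unsigned int prod = a[i] * b[i];        // 32-bit integer multiply
        out[i] = (sum ^ prod) & 0x7fffffffu;    // logical operations
    }
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(unsigned int);

    unsigned int *h_a = new unsigned int[n];
    unsigned int *h_b = new unsigned int[n];
    unsigned int *h_out = new unsigned int[n];
    for (int i = 0; i < n; ++i) { h_a[i] = i; h_b[i] = 2u * i + 1u; }

    unsigned int *d_a, *d_b, *d_out;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    int_ops_kernel<<<(n + 255) / 256, 256>>>(d_a, d_b, d_out, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[42] = %u\n", h_out[42]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    delete[] h_a; delete[] h_b; delete[] h_out;
    return 0;
}
```

Compiling this with nvcc and timing it against a float variant of the same kernel is one way to check the throughput claims on your particular device.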



Source: https://stackoverflow.com/questions/4364879/integer-calculations-on-gpu
