Bad GPU performance when compiling with -G parameter with nvcc compiler


Question


I am doing some tests and I realized that compiling with the -G parameter gives me worse performance than compiling without it.
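
For reference, this is essentially the comparison I am making (the file name and the -arch value below are just placeholders, not my exact setup):

    nvcc -arch=sm_52 mykernel.cu -o app_release
    nvcc -G -arch=sm_52 mykernel.cu -o app_debug

The only difference between the two builds is the -G switch, yet the second executable runs noticeably slower.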

I have checked the NVIDIA documentation:

--device-debug (-G)                         
    Generate debug information for device code. 

But it does not explain why -G causes such bad performance. Where and when is this debug information generated, and what could be the cause of the slowdown?


Answer 1:


Using the -G switch disables most compiler optimizations that nvcc might make in device code. For this reason, the resulting code will often run slower than code compiled without -G.
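
As a rough way to quantify this, a minimal sketch like the one below (a made-up kernel and problem size, not code from the question, with error checking omitted) can be compiled once with -G and once without, and timed with CUDA events:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Arithmetic-heavy kernel; the inner loop gives the optimizer something to do.
    __global__ void scale(float *x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float v = x[i];
            for (int k = 0; k < 100; ++k)
                v = v * 1.0001f + 0.5f;
            x[i] = v;
        }
    }

    int main()
    {
        const int n = 1 << 20;
        float *d_x;
        cudaMalloc(&d_x, n * sizeof(float));
        cudaMemset(d_x, 0, n * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        // Time a single kernel launch with CUDA events.
        cudaEventRecord(start);
        scale<<<(n + 255) / 256, 256>>>(d_x, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("kernel time: %f ms\n", ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_x);
        return 0;
    }

Building this with and without -G and comparing the reported times will typically show the debug build taking noticeably longer for the same kernel.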

This is fairly easy to see by running each version of your executable through cuobjdump -sass myexecutable and looking at the generated device code. You will generally see less device code in the case without -G, and you can see the differences in specific optimizations as well.
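
For instance, a comparison along these lines (the executable names are placeholders; assume app_release was built without -G and app_debug with it) makes the difference visible:

    cuobjdump -sass app_release > release.sass
    cuobjdump -sass app_debug   > debug.sass
    diff release.sass debug.sass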

One of the reasons for this is that highly optimized device code may eliminate actual lines of source code and actual source-code variables. This can make it very difficult to debug the code. Therefore, to enable debugging, most optimizations are disabled when -G is used.
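
As a minimal sketch of what the optimizer is free to do (a made-up kernel, not code from the question):

    // Hypothetical kernel, for illustration only.
    __global__ void saxpy_like(float *y, const float *x, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float tmp = a * x[i];     // may be folded into the store below, so a
                                      // debugger has no 'tmp' to show on this line
            float dead = tmp * 2.0f;  // value never used: typically removed entirely,
            (void)dead;               // along with this source line
            y[i] += tmp;
        }
    }

Stepping through code like this in cuda-gdb after an optimized build is confusing, because lines and variables such as these may no longer exist in the generated SASS; -G keeps them, at the cost of performance.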

Also note that with Thrust, using the -G switch may result in unpredictable behavior. Newer versions of Thrust should behave better, but there may still be unexpected issues when compiling Thrust code with -G.



Source: https://stackoverflow.com/questions/23596240/bad-gpu-performance-when-compiling-with-g-parameter-with-nvcc-compiler
