gpu-programming

CUDA apps time out & fail after several seconds - how to work around this?

Submitted by 可紊 on 2019-11-26 19:47:28
I've noticed that CUDA applications tend to have a rough maximum run time of 5-15 seconds before they fail and exit. I realize it's not ideal for a CUDA application to run that long, but assuming CUDA is the correct choice and the application must run that long because of the amount of sequential work per thread, is there any way to extend this time limit or get around it?

Answer (Die in Sente): I'm not a CUDA expert; I've been developing with the AMD Stream SDK, which AFAIK is roughly comparable. You can disable the Windows watchdog timer, but that is strongly discouraged, for…
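A common way around the watchdog, rather than disabling it, is to split the work into many short kernel launches so that no single launch holds the GPU long enough to trip the timer. The sketch below assumes a CUDA toolchain; the kernel name, chunk size, and the per-element work are illustrative stand-ins, not taken from the original posts.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Stand-in for the real long-running per-element work.
__global__ void processChunk(float *data, int offset, int chunkSize, int n) {
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + chunkSize && i < n)
        data[i] = data[i] * 2.0f;
}

int main() {
    const int N = 1 << 24;      // total elements (illustrative)
    const int CHUNK = 1 << 20;  // elements per launch (illustrative)
    float *d_data;
    cudaMalloc(&d_data, N * sizeof(float));

    // Each launch finishes quickly, so the display driver's watchdog
    // (typically a few seconds) never fires.
    for (int off = 0; off < N; off += CHUNK) {
        processChunk<<<(CHUNK + 255) / 256, 256>>>(d_data, off, CHUNK, N);
        cudaDeviceSynchronize();  // yield the GPU back to the driver between chunks
    }

    cudaFree(d_data);
    return 0;
}
```

The trade-off is launch overhead per chunk; the alternative (running compute on a GPU with no display attached, where no watchdog applies) avoids the restructuring entirely.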

How is CUDA memory managed?

Submitted by 自古美人都是妖i on 2019-11-26 17:26:56
When I run my CUDA program, which allocates only a small amount of global memory (below 20 MB), I get an "out of memory" error. (From other people's posts, I think the problem is related to memory fragmentation.) Trying to understand this, I realize I have a couple of questions about CUDA memory management. Is there a virtual memory concept in CUDA? If only one kernel is allowed to run on CUDA at a time, is all of the memory it used or allocated released after it terminates? If not, when does this memory get released? If more than one kernel is allowed to run on CUDA, how…
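One way to investigate this kind of failure is to query the device's free/total memory before allocating, and to check the error code that `cudaMalloc` returns. This is a minimal diagnostic sketch assuming a CUDA toolchain; the 20 MB figure mirrors the question, and note that fragmentation can make a single contiguous allocation fail even when the reported free total looks sufficient.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeB, totalB;
    // Report current free and total device memory.
    cudaMemGetInfo(&freeB, &totalB);
    printf("free: %zu MB / total: %zu MB\n", freeB >> 20, totalB >> 20);

    float *p;
    cudaError_t err = cudaMalloc(&p, 20u << 20);  // ~20 MB, as in the question
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Device allocations persist across kernel launches; they are released
    // only by cudaFree or when the owning context is destroyed.
    cudaFree(p);
    return 0;
}
</imports>
```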

Setting up Visual Studio Intellisense for CUDA kernel calls

Submitted by 99封情书 on 2019-11-26 12:44:16
Question: I've just started CUDA programming and it's going quite nicely; my GPUs are recognized and everything. I've partially set up IntelliSense in Visual Studio using this extremely helpful guide: http://www.ademiller.com/blogs/tech/2010/10/visual-studio-2010-adding-intellisense-support-for-cuda-c/ and this one: http://www.ademiller.com/blogs/tech/2011/05/visual-studio-2010-and-cuda-easier-with-rc2/ However, IntelliSense still doesn't pick up on kernel calls like this: // KernelCall.cu
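The usual reason is that IntelliSense's C++ parser does not understand the `<<< >>>` launch syntax. A common workaround is to hide the launch behind the `__INTELLISENSE__` predefined macro, which Visual Studio's parser defines but nvcc does not. This is a sketch under that assumption; the macro name `KERNEL_LAUNCH` and the kernel are illustrative.

```cuda
// KernelCall.cu (sketch)
#include <cuda_runtime.h>

#ifdef __INTELLISENSE__
// IntelliSense parses this plain-call form; nvcc never sees it.
#define KERNEL_LAUNCH(kernel, grid, block, ...) kernel(__VA_ARGS__)
#else
// nvcc compiles the real <<< >>> launch.
#define KERNEL_LAUNCH(kernel, grid, block, ...) kernel<<<(grid), (block)>>>(__VA_ARGS__)
#endif

__global__ void addOne(int *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1;
}

int main() {
    int *d_a;
    cudaMalloc(&d_a, 256 * sizeof(int));
    KERNEL_LAUNCH(addOne, 1, 256, d_a, 256);  // no red squiggle in the editor
    cudaDeviceSynchronize();
    cudaFree(d_a);
    return 0;
}
```

This only silences the editor; the generated code is unchanged because nvcc always takes the `<<< >>>` branch.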

Using Java with Nvidia GPUs (CUDA)

Submitted by 女生的网名这么多〃 on 2019-11-26 02:03:29
Question: I'm working on a business project that is done in Java and needs huge computational power to compute business markets: simple math, but with a huge amount of data. We ordered some CUDA GPUs to try it with, and since Java is not supported by CUDA, I'm wondering where to start. Should I build a JNI interface? Should I use JCUDA, or are there other ways? I don't have experience in this field, and I would appreciate it if someone could point me to something so I can start researching and learning.

Answer 1: First of…
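For the JNI route the question mentions, the native side is a small C++/CUDA bridge that the Java class loads with `System.loadLibrary`. The sketch below shows that native side; the Java class and method names (`com.example.GpuMath.scale`) are hypothetical, and a binding library such as JCUDA would let you skip writing this layer by hand.

```cuda
// Native (JNI) side of a hypothetical Java -> CUDA bridge.
// Matches a Java declaration like:
//   package com.example;
//   public class GpuMath { public static native void scale(double[] a, double f); }
#include <jni.h>
#include <cuda_runtime.h>

__global__ void scaleKernel(double *d, int n, double f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_GpuMath_scale(JNIEnv *env, jclass, jdoubleArray arr, jdouble f) {
    jsize n = env->GetArrayLength(arr);
    jdouble *host = env->GetDoubleArrayElements(arr, nullptr);

    double *dev;
    cudaMalloc(&dev, n * sizeof(double));
    cudaMemcpy(dev, host, n * sizeof(double), cudaMemcpyHostToDevice);
    scaleKernel<<<(n + 255) / 256, 256>>>(dev, n, f);
    cudaMemcpy(host, dev, n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    // Mode 0: copy results back into the Java array and release the buffer.
    env->ReleaseDoubleArrayElements(arr, host, 0);
}
```

For "simple math over huge data", the host↔device copies in this bridge often dominate, so batching as much data as possible per native call matters more than the kernel itself.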