nvidia

Opening a fullscreen OpenGL window

青春壹個敷衍的年華 submitted on 2019-12-03 01:32:46
I am trying to open an OpenGL fullscreen window using GLFW on Red Hat Linux. I have a desktop that spans two monitors with a total resolution of 3840x1080. I have two problems: 1. The window opens on only one monitor, with a maximum window width of 1920 (the width of a single monitor). 2. The maximum height of the window is 1003 (which I think is the screen height minus the heights of the task bar and the top bar). This is the code I use to open the window: if (glfwInit() == GL_FALSE) std::cout << "Unable to initialize GLFW\n"; glfwOpenWindowHint(GLFW_STEREO, GL_FALSE); if (glfwOpenWindow
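
A minimal sketch of a fullscreen request with the legacy GLFW 2.x API the excerpt appears to use, assuming the <GL/glfw.h> header and linking with -lglfw -lGL; the 3840x1080 size and the bit depths are placeholders, not a confirmed fix. GLFW 2.x has no multi-monitor API, so a fullscreen window is generally limited to a single monitor's video mode (per-monitor control arrived with GLFW 3's glfwGetMonitors/glfwCreateWindow).

// Hedged sketch: fullscreen window with the legacy GLFW 2.x API.
// The requested 3840x1080 mode will typically be clamped to one
// monitor, since GLFW 2.x cannot span multiple monitors.
#include <GL/glfw.h>
#include <iostream>

int main() {
    if (glfwInit() == GL_FALSE) {
        std::cout << "Unable to initialize GLFW\n";
        return 1;
    }
    glfwOpenWindowHint(GLFW_STEREO, GL_FALSE);
    // width, height, R, G, B, A bits, depth bits, stencil bits, mode
    if (glfwOpenWindow(3840, 1080, 8, 8, 8, 8, 24, 8, GLFW_FULLSCREEN) == GL_FALSE) {
        std::cout << "Unable to open window\n";
        glfwTerminate();
        return 1;
    }
    while (glfwGetWindowParam(GLFW_OPENED)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers();
    }
    glfwTerminate();
    return 0;
}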

How do I select which GPU to run a job on?

会有一股神秘感。 submitted on 2019-12-03 01:10:13
Question: In a multi-GPU computer, how do I designate which GPU a CUDA job should run on? As an example, when installing CUDA, I opted to install the NVIDIA_CUDA-<#.#>_Samples, then ran several instances of the nbody simulation, but they all ran on GPU 0; GPU 1 was completely idle (monitored using watch -n 1 nvidia-smi). Checking CUDA_VISIBLE_DEVICES using echo $CUDA_VISIBLE_DEVICES, I found it was not set. I tried setting it using CUDA_VISIBLE_DEVICES=1 and then running nbody again, but it also went
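
A minimal sketch of the two usual approaches, assuming the CUDA runtime API: restricting which GPUs a process can see with the CUDA_VISIBLE_DEVICES environment variable, or selecting a device explicitly with cudaSetDevice() inside the program. The device index 1 is only an example.

// Hedged sketch: selecting a GPU with the CUDA runtime API.
// The index 1 is illustrative; enumerate devices first in real code.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Found %d CUDA device(s)\n", count);

    // Direct all subsequent CUDA work in this host thread to device 1.
    cudaError_t err = cudaSetDevice(1);
    if (err != cudaSuccess) {
        printf("cudaSetDevice failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // ... kernel launches here run on device 1 ...
    return 0;
}

From the shell, CUDA_VISIBLE_DEVICES=1 ./nbody restricts the process to the second physical GPU, which the application then sees as device 0.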

Compile OpenCL on Mingw Nvidia SDK

六月ゝ 毕业季﹏ submitted on 2019-12-03 00:48:54
Is it possible to compile OpenCL code using MinGW and the Nvidia SDK? I'm aware that it's not officially supported, but that just doesn't make sense. Aren't the libraries provided as statically linked libraries? I mean, once the code is compiled with whatever compiler and linked successfully, what should the problem be? I managed to compile and successfully link my code against the OpenCL libraries provided with Nvidia's SDK, however the executable throws a segmentation fault at clGetPlatformIDs, which is the first OpenCL call in my code. Here is my compilation command: x86_64-w64-mingw32-g++ -std=c++11 File.cpp \
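
A minimal sketch of the first OpenCL call mentioned in the excerpt, with explicit error checking, assuming the standard CL/cl.h header and an OpenCL library that matches the 64-bit MinGW toolchain (linking a 32-bit OpenCL import library into an x86_64 build is one plausible cause of a crash at the very first call, though that is only an assumption about this case).

// Hedged sketch: minimal clGetPlatformIDs call with error checking.
// Assumes -lOpenCL resolves to a 64-bit library compatible with
// the x86_64-w64-mingw32-g++ build.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint num_platforms = 0;
    // First query only the number of available platforms.
    cl_int err = clGetPlatformIDs(0, NULL, &num_platforms);
    if (err != CL_SUCCESS) {
        printf("clGetPlatformIDs failed with error %d\n", err);
        return 1;
    }
    printf("Found %u OpenCL platform(s)\n", num_platforms);
    return 0;
}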

Difference between nVidia Quadro and Geforce cards? [closed]

半城伤御伤魂 submitted on 2019-12-03 00:45:25
Question (closed as off-topic): I'm not a 3D or HPC guy, but I've been tasked with doing some research into those fields for a possible HPC application. Reading benchmarks, comparisons, and specs for nVidia Quadro and GeForce cards, it seems that for cards of a similar generation: Quadro is 2x-3x the price of GeForce hardware-wise, the

Does GPL code linking with proprietary library depend which is created first? [closed]

喜你入骨 submitted on 2019-12-03 00:25:15
Microsoft creates its Windows and MFC DLL libraries, etc. An open source developer writes a new MFC application and releases the source code as GPL. The app has to link with the MS DLLs/libraries to run on Windows, but I don't think anyone can argue that we now have the right to force Microsoft to GPL their DLLs. Does this mean the GPL license really depends on which one is "created" first? If a proprietary library is created first (such as the Windows DLLs) and is published without linking to any GPL code, and later a GPL program is linked with it, then the GPL program can't convert the

HOW TO: Import TensorFlow in Jupyter Notebook from Conda with GPU support?

让人想犯罪 __ submitted on 2019-12-02 23:51:40
I have installed tensorflow using the anaconda environment as mentioned on the tensorflow website, and after doing so my python installation path changed. dennis@dennis-HP:~$ which python /home/dennis/anaconda2/bin/python And Jupyter was installed. I assumed that if I was able to import and use tensorflow in the conda environment, I would be able to do the same in Jupyter. But that was not the case - importing tensorflow on my system (without activating the environment): dennis@dennis-HP:~$ python Python 2.7.11 |Anaconda 4.1.0 (64-bit)| (default, Jun 15 2016, 15:21:30) [GCC 4.4.7 20120313 (Red

CUDA Runtime API error 38: no CUDA-capable device is detected

穿精又带淫゛_ submitted on 2019-12-02 23:06:13
The Situation: I have a 2-GPU server (Ubuntu 12.04) where I replaced a Tesla C1060 with a GTX 670. Then I installed CUDA 5.0 over the 4.2. Afterwards I compiled all examples except for simpleMPI without error. But when I run ./deviceQuery I get the following error message: foo@bar-serv2:~/NVIDIA_CUDA-5.0_Samples/bin/linux/release$ ./deviceQuery ./deviceQuery Starting... CUDA Device Query (Runtime API) version (CUDART static linking) cudaGetDeviceCount returned 38 -> no CUDA-capable device is detected What I have tried: To solve this I tried all of the things recommended by CUDA-capable device, but
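
A minimal sketch of the kind of check deviceQuery performs, assuming the CUDA runtime API; it reports both the driver and runtime CUDA versions, which is often the first thing worth comparing when this error appears after installing a newer toolkit over an older one.

// Hedged sketch: deviceQuery-style check plus driver/runtime versions.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driver_version = 0, runtime_version = 0;
    cudaDriverGetVersion(&driver_version);    // CUDA version supported by the installed driver
    cudaRuntimeGetVersion(&runtime_version);  // CUDA runtime version this binary was built against
    printf("Driver CUDA version: %d, Runtime CUDA version: %d\n",
           driver_version, runtime_version);

    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // With the error in the excerpt, this prints the same
        // "no CUDA-capable device is detected" message.
        printf("cudaGetDeviceCount returned %d -> %s\n", (int)err, cudaGetErrorString(err));
        return 1;
    }
    printf("Detected %d CUDA-capable device(s)\n", count);
    return 0;
}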

NVidia drivers not running on AWS after restarting the AMI

五迷三道 submitted on 2019-12-02 22:56:51
Hi everybody, I have the following problem: I started a P2 instance with this AMI. I installed some tools like screen, torch, etc. Then I successfully ran some experiments using the GPU and created an image of the instance, so that I could terminate it and run it again later. Later I started a new instance from the AMI I created before. Everything looked fine - screen, torch, and my experiments were present on the system, but I couldn't run the same experiments as before: NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed

why do we need cudaDeviceSynchronize(); in kernels with device-printf?

陌路散爱 submitted on 2019-12-02 21:08:34
__global__ void helloCUDA(float f) { printf("Hello thread %d, f=%f\n", threadIdx.x, f); } int main() { helloCUDA<<<1, 5>>>(1.2345f); cudaDeviceSynchronize(); return 0; } Why is cudaDeviceSynchronize() needed here, when in many places (for example here) it is not required after a kernel call? A kernel launch is asynchronous. This means it returns control to the CPU thread immediately after starting the GPU work, before the kernel has finished executing. So what is the next thing in the CPU thread here? Application exit. At application exit, its ability to send output to the standard output is terminated by
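
For reference, a self-contained version of the example quoted in the excerpt, with the headers it needs added as an assumption; it illustrates the point being made: without the synchronization, main() can return and tear the process down before the device-side printf output reaches the host.

// Hedged sketch: the excerpt's example, made self-contained.
// cudaDeviceSynchronize() blocks the host until the kernel finishes,
// which also flushes the device-side printf buffer to stdout.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void helloCUDA(float f) {
    printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

int main() {
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaDeviceSynchronize();  // remove this and the output may never appear
    return 0;
}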

Why aren't there bank conflicts in global memory for Cuda/OpenCL?

天涯浪子 submitted on 2019-12-02 17:11:23
One thing I haven't figured out, and Google isn't helping me with, is why it is possible to have bank conflicts with shared memory but not in global memory. Can there be bank conflicts with registers? UPDATE: Wow, I really appreciate the two answers from Tibbit and Grizzly. It seems that I can only give a green check mark to one answer, though. I am newish to Stack Overflow. I guess I have to pick one answer as the best. Can I do something to say thank you to the answer I don't give a green check to? Short Answer: There are no bank conflicts in either global memory or in registers. Explanation: The
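
A small illustrative sketch of what "bank conflict" means in the shared-memory case, assuming the common layout of 32 banks of 4-byte words; the stride and sizes are examples only. Global memory has no such banking: its performance concern is coalescing, not conflicts.

// Hedged sketch: shared-memory access patterns with and without bank conflicts.
// Assumes 32 banks of 4-byte words; indices are illustrative only.
#include <cuda_runtime.h>

__global__ void bankConflictDemo(float* out) {
    __shared__ float tile[32 * 32];
    int tid = threadIdx.x;  // assume one warp of 32 threads

    // Fill shared memory cooperatively.
    for (int i = tid; i < 32 * 32; i += 32)
        tile[i] = (float)i;
    __syncthreads();

    // Conflict-free: consecutive threads read consecutive 4-byte words,
    // so each lane hits a different bank.
    float a = tile[tid];

    // 32-way bank conflict: a stride of 32 words maps every lane in the
    // warp onto the same bank, so the 32 reads are serialized.
    float b = tile[tid * 32];

    out[tid] = a + b;
}

int main() {
    float* d_out = nullptr;
    cudaMalloc(&d_out, 32 * sizeof(float));
    bankConflictDemo<<<1, 32>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}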