nvidia

nvcc fatal : Cannot find compiler 'cl.exe' in PATH although Visual Studio 12.0 is added to PATH

末鹿安然 submitted on 2019-12-05 05:37:24
I have followed all the instructions from https://datanoord.com/2016/02/01/setup-a-deep-learning-environment-on-windows-theano-keras-with-gpu-enabled/ but can't seem to get it to work. I have added C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin to my PATH variable. Every time I run the code from the Theano website to test whether a CPU or GPU is used, it gives me a fatal error of "nvcc fatal : Cannot find compiler 'cl.exe' in PATH". Here is the code I use to test:

    from theano import function, config, shared, sandbox
    import theano.tensor as T
    import numpy
    import time

    vlen = 10 * 30 * 768
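A plausible workaround, sketched here as an assumption rather than a verified fix: nvcc locates cl.exe through the PATH of the process that launches it, so prepending the Visual Studio bin directory from inside Python, before Theano compiles anything, may resolve the error. The directory below is the one from the question.

    import os

    # Prepend the VS 12.0 compiler directory so nvcc can find cl.exe.
    vc_bin = r"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin"
    os.environ["PATH"] = vc_bin + os.pathsep + os.environ.get("PATH", "")

    import theano  # import only after PATH is fixed, so nvcc sees cl.exe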

Is it unsafe to run multiple tensorflow processes on the same GPU?

只谈情不闲聊 submitted on 2019-12-05 05:37:19
I only have one GPU (Titan X Pascal, 12 GB VRAM) and I would like to train multiple models in parallel on the same GPU. I tried encapsulating my model in a single Python program (called model.py), and I included code in model.py to restrict VRAM usage (based on this example). I was able to run up to 3 instances of model.py concurrently on my GPU (with each instance taking a little less than 33% of my VRAM). Mysteriously, when I tried with 4 models I received an error:

    2017-09-10 13:27:43.714908: E tensorflow/stream_executor/cuda/cuda_dnn.cc:371] could not create cudnn handle: CUDNN_STATUS
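For reference, a minimal sketch of the per-process cap the question describes (TF 1.x API; the 0.3 fraction is illustrative): each process reserves a fixed slice of VRAM so several can coexist on one card. Note that cuDNN allocates workspace memory on top of this reservation, which is a common reason a fourth process fails to create its handle even when the fractions appear to add up.

    import tensorflow as tf

    # Cap this process at roughly 30% of the card's VRAM (illustrative value).
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))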

How to check the running status of an Nvidia GPU

。_饼干妹妹 submitted on 2019-12-05 05:21:34
When computing on an Nvidia GPU, you often need to check the GPU's running state. This article describes the methods I have used in practice.

1. Use "nvidia-smi" to view GPU status information

nvidia-smi is normally installed automatically along with the Nvidia driver, and it is the simplest and most direct way to check GPU status. Simply run "nvidia-smi" to see the current state: the working status of each GPU, temperature, memory usage, the processes using the GPU, and so on.

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 367.27                 Driver Version: 367.27                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |======
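For scripted monitoring, nvidia-smi also offers a machine-readable query mode. A minimal Python sketch (field names as listed by "nvidia-smi --help-query-gpu"):

    import subprocess

    # Ask nvidia-smi for a CSV snapshot of each GPU's state.
    fields = "index,name,temperature.gpu,utilization.gpu,memory.used,memory.total"
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=" + fields, "--format=csv,noheader"],
    ).decode()
    for line in out.strip().splitlines():
        print(line)  # e.g. "0, TITAN X (Pascal), 41, 17 %, 512 MiB, 12189 MiB"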

How can I use 100% of VRAM on a secondary GPU from a single process on Windows 10?

允我心安 submitted on 2019-12-05 01:16:11
This is on a Windows 10 computer with no monitor attached to the Nvidia card. I've included output from nvidia-smi showing > 5.04G was available. Here is the TensorFlow code asking it to allocate just slightly more than I had seen previously (I want this to be as close as possible to memory fraction = 1.0):

    config = tf.ConfigProto()
    #config.gpu_options.allow_growth=True
    config.gpu_options.per_process_gpu_memory_fraction=0.84
    config.log_device_placement=True
    sess = tf.Session(config=config)

Just before running the above code in a Jupyter notebook I ran nvidia-smi:

    +-----------------------------------
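One approach worth sketching (an assumption, not a confirmed fix for this machine): pin the process to the headless card and use allow_growth, so TensorFlow claims memory incrementally instead of reserving a fixed fraction up front. On Windows, the WDDM driver model typically reserves a share of VRAM for the OS even on cards without a display attached, so a full 100% is generally not reachable unless the card runs in TCC mode.

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # hypothetical index of the headless card

    import tensorflow as tf

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True  # grow allocations instead of pre-reserving
    sess = tf.Session(config=config)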

Forcing hardware accelerated rendering

时光毁灭记忆、已成空白 submitted on 2019-12-05 00:11:30
I have an OpenGL library written in C++ that is used from a C# application through C++/CLI adapters. My problem is that if the application is used on laptops with Nvidia Optimus technology, the application will not use hardware acceleration and fails. I have tried to use the info found in Nvidia's document http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf about linking libs to my C++ DLL and exporting NvOptimusEnablement from my OpenGL library, but that fails. I guess I have to do something with the .exe, not with the .dlls linked to the .exe

C# Performance Counter Help, Nvidia GPU

怎甘沉沦 submitted on 2019-12-04 23:57:09
Question: So I've been experimenting with the PerformanceCounter class in C# and have had great success probing the CPU counters and almost everything I can find in the Windows performance monitor. However, I cannot gain access to the "NVIDIA GPU" category... So for example, the following line of code is how it usually works:

    PerformanceCounter cpuCounter = new PerformanceCounter("Processor", "% Processor Time", "_Total");

That code works fine, but the GPU category that appeared in the performance

Support for Nvidia CUDA Toolkit 9.2

 ̄綄美尐妖づ submitted on 2019-12-04 23:16:14
Question: What is the reasoning behind tensorflow-gpu being bound to a specific version of Nvidia's CUDA Toolkit? The current version appears to look for 9.0 specifically and will not work with anything greater. For example, I installed the latest Toolkit 9.2 and added it to PATH, but tensorflow-gpu will not work with it and complains that it is looking for 9.0. I can see major version updates not being supported, but a minor release?

Answer 1: That's a good question. According to NVidia's website, The CUDA driver
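Before debugging toolkit paths, a quick diagnostic sketch (TF 1.x API) can confirm what the installed wheel supports; note these calls only report CUDA support and GPU visibility, not the exact toolkit version the wheel links against:

    import tensorflow as tf

    print(tf.__version__)
    print(tf.test.is_built_with_cuda())  # True for tensorflow-gpu wheels
    print(tf.test.gpu_device_name())     # empty string if the CUDA runtime can't load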

NVidia CUDA toolkit 7.5.27 failing to install on OS X

江枫思渺然 submitted on 2019-12-04 22:35:28
Downloading the CUDA toolkit DMG works, but the installer fails with a cryptic "package manifest parsing error" after package selection. Running the installer from the command line using the binary inside fails in a similar manner. The log file at /var/log/cuda_installer.log basically says the same:

    Apr 28 18:16:10 CUDAMacOSXInstaller[58493] : Awoken from nib!
    Apr 28 18:16:10 CUDAMacOSXInstaller[58493] : Switched to local mode.
    Apr 28 18:16:24 CUDAMacOSXInstaller[58493] : Package manifest parsing error!
    Apr 28 18:16:24 CUDAMacOSXInstaller[58493] : Package manifest parsing error!
    Apr 28

OpenGL rendering in Windows XP with multiple video cards

依然范特西╮ submitted on 2019-12-04 20:37:01
Question: I'm developing an OpenGL application for Windows XP. The target machine has 2 NVIDIA GeForce 9800GT video cards, which are needed because the application has to output 2 streams of analog video. The application itself has two OpenGL windows, one for each video card, and each video card is connected to one monitor. As for the code, it's based on a minimal OpenGL example. How can I know if the application is utilizing both video cards for rendering? At the moment, I don't care if the
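The question's code is C++, but one way to probe which adapter services a given context is to query GL_RENDERER once that context is current. A minimal sketch in Python with PyOpenGL/GLUT (the window name and setup are illustrative, not taken from the question):

    from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER
    from OpenGL.GLUT import glutInit, glutCreateWindow

    glutInit()
    glutCreateWindow(b"probe")  # creating the window makes its GL context current
    print(glGetString(GL_VENDOR).decode())    # e.g. "NVIDIA Corporation"
    print(glGetString(GL_RENDERER).decode())  # e.g. "GeForce 9800 GT/PCIe/SSE2"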

Video decoder on Cuda ffmpeg

有些话、适合烂在心里 submitted on 2019-12-04 19:52:08
I am starting to implement a custom video decoder that utilizes the CUDA HW decoder to generate YUV frames for subsequent encoding. How can I fill the "CUVIDPICPARAMS" struct? Is it possible? To get video stream packets I use the ffmpeg-dev libs avcodec, avformat, etc. My steps:

    1) Open the input file:
       avformat_open_input(&ff_formatContext, in_filename, nullptr, nullptr);
    2) Get the video stream properties:
       avformat_find_stream_info(ff_formatContext, nullptr);
    3) Get the video stream:
       ff_video_stream = ff_formatContext->streams[i];
    4) Get the CUDA device and init it:
       cuDeviceGet(&cu_device, 0);
       CUcontext cu_vid_ctx;
    5