gpu

How do I get the NVIDIA core temperature in an integer value?

别说谁变了你拦得住时间么 submitted on 2019-12-18 09:38:09
Question: I am taking an Arduino microcontroller class and I'm working on my final project: an automated computer cooling system that works according to case temperature. I was unable to get my NVIDIA GPU core temperature using the following sources: this MSDN link or this NVIDIA link. How can I get the value of the temperature of my GPU? My knowledge of C# is basic, and I couldn't make heads or tails of the manual or the code examples on MSDN. Answer 1: I'm going to go ahead and answer my own question after a long time
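The self-answer above is cut off. Independent of whatever that answer used, one way to read the core temperature as a plain integer is NVIDIA's NVML library (the same interface nvidia-smi uses). This is a minimal C sketch of that approach, not the C# solution the question asked for:

    #include <stdio.h>
    #include <nvml.h>   /* NVIDIA Management Library; link with -lnvidia-ml */

    int main(void)
    {
        nvmlDevice_t dev;
        unsigned int temp = 0;

        if (nvmlInit() != NVML_SUCCESS)
            return 1;                                     /* driver library unavailable */
        if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS &&
            nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp) == NVML_SUCCESS)
            printf("GPU core temperature: %u C\n", temp); /* integer degrees Celsius */
        nvmlShutdown();
        return 0;
    }

A C# program can P/Invoke the same nvml entry points, though the truncated answer may have taken a different route.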

Does GDI+ support graphics acceleration?

三世轮回 submitted on 2019-12-18 09:33:12
Question: I'm trying to write a screensaver for Windows using C++ and the Windows APIs. To render graphics I'm using GDI+, but the issue is that rendering PNGs with even a small amount of animation (fade-in and fade-out) becomes very CPU-heavy. So I was wondering: is there a way to enable GPU acceleration for the GDI+ APIs? And if that's not possible, is there something I can use from unmanaged code that supports GPU acceleration (apart from OpenGL or DirectX)? Answer 1: Nope. GDI is mostly about

GPU is not used for calculations despite tensorflow-gpu being installed

空扰寡人 submitted on 2019-12-18 09:08:15
Question: My computer has the following software installed: Anaconda (3), TensorFlow (GPU), and Keras. There are two Anaconda virtual environments, one with TensorFlow for Python 2.7 and one for Python 3.5, both the GPU version, installed according to the TF instructions. (I had a CPU version of TensorFlow installed previously in a separate environment, but I've deleted it.) When I run the following:

    source activate tensorflow-gpu-3.5
    python code.py

and check nvidia-smi, it shows only 3 MiB of GPU memory usage by

Making CUB BlockRadixSort on-chip entirely?

◇◆丶佛笑我妖孽 submitted on 2019-12-18 07:20:43
Question: I am reading the CUB documentation and examples:

    #include <cub/cub.cuh> // or equivalently <cub/block/block_radix_sort.cuh>

    __global__ void ExampleKernel(...)
    {
        // Specialize BlockRadixSort for 128 threads owning 4 integer items each
        typedef cub::BlockRadixSort<int, 128, 4> BlockRadixSort;
        // Allocate shared memory for BlockRadixSort
        __shared__ typename BlockRadixSort::TempStorage temp_storage;
        // Obtain a segment of consecutive items that are blocked across threads
        int thread_keys[4];
        ...
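The example above is truncated. Assuming the remainder simply loads keys and calls the block-wide sort (a hedged reconstruction, not the exact code from the documentation), a complete on-chip sort of 128 x 4 integers per block might look like this:

    #include <cub/cub.cuh>

    // One block of 128 threads sorts its 512 int keys entirely on-chip
    // (registers + shared memory); only the load and store touch global memory.
    __global__ void BlockSortKernel(int *d_in, int *d_out)
    {
        typedef cub::BlockRadixSort<int, 128, 4>                         BlockRadixSort;
        typedef cub::BlockLoad<int, 128, 4, cub::BLOCK_LOAD_TRANSPOSE>   BlockLoad;
        typedef cub::BlockStore<int, 128, 4, cub::BLOCK_STORE_TRANSPOSE> BlockStore;

        // The three primitives run in sequence, so their temporary
        // storage can share one shared-memory allocation.
        __shared__ union {
            typename BlockLoad::TempStorage      load;
            typename BlockRadixSort::TempStorage sort;
            typename BlockStore::TempStorage     store;
        } temp_storage;

        int block_offset = blockIdx.x * (128 * 4);
        int thread_keys[4];

        BlockLoad(temp_storage.load).Load(d_in + block_offset, thread_keys);
        __syncthreads();

        BlockRadixSort(temp_storage.sort).Sort(thread_keys);   // block-wide radix sort
        __syncthreads();

        BlockStore(temp_storage.store).Store(d_out + block_offset, thread_keys);
    }

Nothing here forces a round trip to global memory between the load and the store, which is the point of the question.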

TensorFlow in nvidia-docker: failed call to cuInit: CUDA_ERROR_UNKNOWN

假如想象 submitted on 2019-12-18 06:31:26
Question: I have been working on getting an application that relies on TensorFlow to run as a Docker container with nvidia-docker. I have built my application on top of the tensorflow/tensorflow:latest-gpu-py3 image. I run my Docker container with the following command:

    sudo nvidia-docker run -d -p 9090:9090 -v /src/weights:/weights myname/myrepo:mylabel

When looking at the logs through Portainer I see the following: 2017-05-16 03:41:47.715682: W tensorflow/core/platform/cpu_feature_guard.cc:45]

How to check if pytorch is using the GPU?

纵饮孤独 submitted on 2019-12-17 21:25:41
Question: I would like to know if PyTorch is using my GPU. It's possible to detect activity on the GPU during the run with nvidia-smi, but I want something written in a Python script. Is there a way to do so? Answer 1: This is going to work:

    In [1]: import torch
    In [2]: torch.cuda.current_device()
    Out[2]: 0
    In [3]: torch.cuda.device(0)
    Out[3]: <torch.cuda.device at 0x7efce0b03be0>
    In [4]: torch.cuda.device_count()
    Out[4]: 1
    In [5]: torch.cuda.get_device_name(0)
    Out[5]: 'GeForce GTX

How to perform Hadamard product with CUBLAS on complex numbers?

拈花ヽ惹草 submitted on 2019-12-17 21:23:58
Question: I need to compute the element-wise multiplication of two vectors of complex numbers (the Hadamard product) with NVIDIA CUBLAS. Unfortunately, there is no HAD operation in CUBLAS. Apparently you can do this with the SBMV operation, but it is not implemented for complex numbers in CUBLAS. I cannot believe there is no way to achieve this with CUBLAS. Is there any other way to achieve it with CUBLAS for complex numbers? I cannot write my own kernel; I have to use CUBLAS (or another standard
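One workaround that is often suggested (an assumption here, since the answer above is cut off) is to view one vector as the diagonal of a matrix and use CUBLAS's DGMM routine, which, unlike SBMV, does have complex variants (cublasCdgmm / cublasZdgmm). A sketch for single-precision complex vectors:

    #include <cublas_v2.h>
    #include <cuComplex.h>

    // Hadamard product c[i] = a[i] * b[i] for length-n cuComplex device vectors,
    // by treating a as a 1 x n matrix and b as a diagonal: C = A * diag(b).
    cublasStatus_t hadamard(cublasHandle_t handle,
                            const cuComplex *a, const cuComplex *b,
                            cuComplex *c, int n)
    {
        return cublasCdgmm(handle, CUBLAS_SIDE_RIGHT,
                           1, n,        // A is 1 x n
                           a, 1,        // A, lda
                           b, 1,        // x (the diagonal), incx
                           c, 1);       // C, ldc
    }

For double-precision complex data the analogous call is cublasZdgmm with cuDoubleComplex pointers.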

Double precision floating point in CUDA

喜欢而已 submitted on 2019-12-17 19:13:43
Question: Does CUDA support double-precision floating-point numbers? Also, what are the reasons for this? Answer 1: If your GPU has compute capability 1.3 then you can do double precision. You should be aware, though, that 1.3 hardware has only one double-precision FP unit per multiprocessor, which has to be shared by all the threads on that multiprocessor, whereas there are 8 single-precision FPUs, so each active thread has its own single-precision FPU. In other words, you may well see 8x worse performance with double precision
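The answer is cut off, but one practical consequence is worth a concrete note: double-precision code has to be compiled for hardware that actually has double-precision units. A minimal illustrative kernel (not from the original answer):

    // scale.cu -- trivial double-precision kernel, illustrative only.
    __global__ void scale(double *x, double a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] *= a;   // runs on the (much scarcer) double-precision units
    }

    // Build for compute capability 1.3 or newer, e.g.:
    //   nvcc -arch=sm_13 scale.cu
    // On the old toolchains contemporary with this answer, omitting a suitable
    // -arch made the compiler demote double to float and emit a warning.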

Is OpenGL Development GPU Dependent?

泄露秘密 submitted on 2019-12-17 18:59:27
Question: I am developing an Android application in OpenGL ES 2.0. In this application I draw multiple lines and circles via touch events in a GLSurfaceView. Since OpenGL behavior depends on the GPU, results differ across devices: it currently works fine on a Google Nexus 7 (ULP GeForce). On a Samsung Galaxy Note 2 (Mali-400MP), when I try to draw more than one line, it clears the previous line and draws only the current one. On a Sony Xperia Neo V (Adreno 205), drawing a new line crashes the surface, as shown in the image below. Is it possible