gpu

Three.js Shader-Texture Blinking

怎甘沉沦 submitted on 2019-12-10 17:23:38

Question: I'm new to three.js and think it's great. I'm trying to get a better grasp of moving texture data to the shader so I can do most of the work on the GPU. I based my program on Mr. Doob's magic dust example, but instead of particles I'm using loaded models stored in a texture. I'm currently getting flickering. The code below is a rough example of the flickering and is close to what I'm doing. If anyone could help me understand what I'm doing wrong or where the flickering is coming from …

Does TensorFlow use all of the hardware on the GPU?

最后都变了- submitted on 2019-12-10 17:13:53

Question: The NVIDIA GP100 has 30 TPC circuits and 240 "texture units". Do the TPCs and texture units get used by TensorFlow, or are these bits of silicon effectively unused for machine learning? I am watching GPU-Z and Windows 10's built-in GPU performance monitor during a neural-net training run and I can see that various hardware functions are underutilized. TensorFlow uses CUDA, and CUDA, I presume, has access to all hardware components. If I knew where the gap is (between TensorFlow and the underlying CUDA) and …
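Not something stated in the question itself, just a minimal sketch (assuming the TensorFlow 1.x API current at the time) of how to see which devices TensorFlow has registered and where it places each op. Note that this only shows placement at the whole-device level; the internal units the question asks about are not exposed here.

# Minimal sketch, assuming the TensorFlow 1.x API: list registered devices
# and log per-op placement for a simple matmul.
import tensorflow as tf
from tensorflow.python.client import device_lib

# Devices TensorFlow can see (CPU, GPU), with a short hardware description.
for d in device_lib.list_local_devices():
    print(d.name, d.device_type, d.physical_device_desc)

# Log which device each operation is assigned to during a session run.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    sess.run(tf.matmul(a, b))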

How to get the ID of the GPU allocated to a SLURM job on a multi-GPU node?

妖精的绣舞 submitted on 2019-12-10 16:56:16

Question: When I submit a SLURM job with the option --gres=gpu:1 to a node with two GPUs, how can I get the ID of the GPU that is allocated to the job? Is there an environment variable for this purpose? The GPUs I'm using are all NVIDIA GPUs. Thanks. Answer 1: You can get the GPU id from the environment variable CUDA_VISIBLE_DEVICES. This variable is a comma-separated list of the GPU ids assigned to the job. Answer 2: Slurm stores this information in an environment variable, SLURM_JOB_GPUS. One way to keep …
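As a minimal sketch of what the answers describe; which of the two variables is actually set depends on how the cluster's gres plugin is configured, so checking both is an assumption, not part of either answer.

# Minimal sketch: read the GPU ids Slurm assigned to this job from the
# environment, inside the job script or the program it launches.
import os

cuda_ids = os.environ.get("CUDA_VISIBLE_DEVICES", "")   # e.g. "0" or "0,1"
slurm_ids = os.environ.get("SLURM_JOB_GPUS", "")

gpu_ids = [g for g in (cuda_ids or slurm_ids).split(",") if g]
print("GPU(s) allocated to this job:", gpu_ids)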

PyTorch CUDA vs. NumPy for arithmetic operations: which is fastest?

a 夏天 submitted on 2019-12-10 16:46:20

Question: I performed element-wise multiplication using Torch with GPU support and using NumPy, with the functions below, and found that NumPy runs faster than Torch, which I suspect shouldn't be the case. I want to know how to perform general arithmetic operations with Torch on the GPU. Note: I ran these code snippets in a Google Colab notebook. Define the default tensor type to enable the global GPU flag: torch.set_default_tensor_type(torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor) …
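The excerpt cuts off before the actual timing code, so the following is only a sketch of how such a comparison is usually made fair: CUDA kernels launch asynchronously, so the GPU timing needs torch.cuda.synchronize() before the clock is read, which is a common reason naive measurements make NumPy look faster.

# Minimal sketch: time element-wise multiplication in NumPy and in PyTorch
# on the GPU, synchronizing before and after the CUDA kernel.
import time
import numpy as np
import torch

n = 10_000_000
a_np, b_np = np.random.rand(n), np.random.rand(n)

t0 = time.perf_counter()
c_np = a_np * b_np
print("NumPy:", time.perf_counter() - t0, "s")

if torch.cuda.is_available():
    a_t = torch.rand(n, device="cuda")
    b_t = torch.rand(n, device="cuda")
    torch.cuda.synchronize()            # finish any pending setup work
    t0 = time.perf_counter()
    c_t = a_t * b_t
    torch.cuda.synchronize()            # wait for the kernel to complete
    print("Torch (GPU):", time.perf_counter() - t0, "s")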

Get GPU info on Android without SurfaceView

本秂侑毒 submitted on 2019-12-10 16:07:46

Question: On Android, is there a way to get GPU information without creating a SurfaceView? I'm not looking to draw anything with OpenGL; I just need hardware information like the vendor, OpenGL ES version, available extensions, etc. Answer 1: I'm sorry, I'm not sure how to do that on Android, but the function glGetString gives you access to the OpenGL information. Here is some sample C++-style code that outputs the extensions supported by your hardware, which I hope you'll be able to adapt to …

TensorFlow 1.0 does not see GPU on Windows (but Theano does)

不想你离开。 submitted on 2019-12-10 14:56:30

Question: I have a working installation of Keras & Theano on Windows (set up by following this tutorial). Now I've tried to switch the backend to TensorFlow, which worked fine. The only issue is that TensorFlow does not detect my GPU, while Theano does: from tensorflow.python.client import device_lib def get_available_gpus(): local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos if x.device_type == 'GPU'] yields no results, but when running …
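A small diagnostic sketch, assuming the TensorFlow 1.x API of that era; the CPU-only "tensorflow" wheel vs. "tensorflow-gpu" distinction is an assumption about a common cause at the time, not something stated in the question.

# Minimal sketch: check whether the installed TensorFlow build has CUDA
# support at all before suspecting drivers or CUDA/cuDNN paths.
import tensorflow as tf

print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU device:", tf.test.gpu_device_name() or "<none found>")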

Determinant calculation with CUDA [closed]

谁说我不能喝 submitted on 2019-12-10 14:38:53

Question: Is there any library or freely available code that will calculate the determinant of a small (6x6), double-precision matrix entirely on a GPU? Answer 1: Here is the plan: you will need to buffer hundreds of these tiny matrices and launch the kernel once to compute the determinants for all of them at once. I am not …
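The answer outlines a hand-written batched CUDA kernel; the sketch below is not that kernel, but the same batching idea expressed with PyTorch (a recent version providing torch.linalg.det) as the GPU backend, purely for illustration.

# Minimal sketch: pack many tiny 6x6 matrices into one batch and compute
# all determinants in a single GPU call instead of one launch per matrix.
import torch

batch = torch.rand(512, 6, 6, dtype=torch.float64)   # 512 small matrices
if torch.cuda.is_available():
    batch = batch.cuda()

dets = torch.linalg.det(batch)                        # one batched call
print(dets.shape)                                     # torch.Size([512])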

How can I read the GPU load?

半腔热情 submitted on 2019-12-10 14:37:50

Question: I am writing a program that monitors various computer resources, such as CPU usage and so on. I want to monitor GPU usage (the GPU load, not the temperature) as well. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Diagnostics; using DannyGeneral; namespace CpuUsage { public partial class Form1 : Form { private bool processEnded; private …
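Not the C#/WinForms route the question is taking; purely as a sketch of one way to read GPU load, the snippet below polls nvidia-smi's query interface (NVIDIA GPUs only).

# Minimal sketch: read per-GPU utilization by shelling out to nvidia-smi.
import subprocess

def gpu_utilization_percent():
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    # One line per GPU, each an integer percentage.
    return [int(line.strip()) for line in out.strip().splitlines()]

print(gpu_utilization_percent())   # e.g. [37] on a single-GPU machine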

Malloc Memory corruption in C

南笙酒味 submitted on 2019-12-10 14:23:11

Question: I have a problem using malloc. I have a function called jacobi_gpu which is called many times: int main(int argc, char* argv[]){ /* ... */ int totalrot=0; while(nrot>0){ iter++; nrot=jacobi_gpu(a,q, tol, dimmat); totalrot+=nrot; printf("iter =%3d nrot=%3d\n",iter, nrot); } /* ... */ } The parameters a, q, tol and dimmat are correctly initialized. A and Q are two square matrices and dimmat is their dimension. Here is my code: int jacobi_gpu(double A[], double Q[], double tol, long int dim){ int …

Using GPU to speed up BigInteger calculations

耗尽温柔 submitted on 2019-12-10 12:43:43

Question: I am almost done with an algorithm that processes some very large integers (on the order of 2 raised to the power 100,000,000). This takes a couple of hours of highly parallel code on a 16-core server with more than adequate memory, since the algorithm is not memory-intensive. I use the BigInteger class in .NET 4. The specifics of the algorithm are not important, but for context, the following is a fairly exhaustive list of operations performed on these integers and some salient …