gpu

Detecting good GPU on Android

Submitted by 久未见 on 2019-12-11 06:41:54

Question: I am looking for a safe way to detect whether the current GPU belongs to the high-end profile (such as Motorola's Atrix or the Galaxy S2), so that I can enable some more sophisticated visual effects in my game at run time. Has anyone successfully done anything similar? I thought about detecting a dual-core CPU, which would usually come with a good GPU, but I don't have enough devices to test whether that works in most situations.

Answer 1: If those "more sophisticated visual effects" require …

GPU parallel programming C/C++ [closed]

Submitted by 匆匆过客 on 2019-12-11 06:38:30

Question: Closed as off-topic for Stack Overflow; it is not currently accepting answers. Closed 2 years ago. I want to learn GPU parallel programming in C/C++. What library and compiler should I use? If they are open source, that would be nice. Note: I have some practice with OpenMP and MPI, though that is only for CPU parallel programming.

Answer 1: It depends on your GPU. OpenCL: it is open source and works on Nvidia and AMD …

Keras does not use GPU in PyCharm with Python 3.5 and TensorFlow 1.4 [duplicate]

Submitted by 守給你的承諾、 on 2019-12-11 05:47:55

Question: This question already has answers here: Keras with TensorFlow backend not using GPU (3 answers). Closed 2 years ago.

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    def get_available_gpus():
        local_device_protos = device_lib.list_local_devices()
        return [x.name for x in local_device_protos if x.device_type == 'GPU']

    xx = get_available_gpus()
    print('The GPU device is: ', xx)
    print('Tensorflow: ', tf.__version__)

This gives me the following output:

    Using TensorFlow backend.
    2017-12-04 18:13:37 …
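The snippet above only lists the devices TensorFlow can see. A minimal sketch of a fuller check, assuming a TensorFlow 1.x installation (if the GPU build or its CUDA/cuDNN libraries are missing, both calls below simply report no GPU rather than raising):

    import tensorflow as tf

    # True only if this TensorFlow build was compiled with CUDA support.
    print('Built with CUDA:', tf.test.is_built_with_cuda())

    # Empty string when TensorFlow cannot see (or cannot initialise) a GPU.
    print('Default GPU device:', tf.test.gpu_device_name() or 'none')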

How can I tell if H2O 3.11.0.266 is running with GPUs?

Submitted by 半世苍凉 on 2019-12-11 05:37:59

Question: I've installed H2O 3.11.0.266 on Ubuntu 16.04 with CUDA 8.0 and libcudnn.so.5.1.10, so I believe H2O should be able to find my GPUs. However, when I start up h2o.init() in Python, I do not see evidence that it is actually using my GPUs. I see:

    H2O cluster total cores: 8
    H2O cluster allowed cores: 8

which is the same as I had in the previous version (pre-GPU). Also, http://127.0.0.1:54321/flow/index.html shows only 8 cores as well. I wonder if I don't have something properly installed or …
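The core counts reported by h2o.init() and by Flow refer to CPU cores only, so they are not evidence either way. One rough check from Python, assuming nvidia-smi is on the PATH, is to ask the NVIDIA driver which processes are holding the GPU while a GPU-enabled algorithm is training:

    import subprocess

    # Lists every process currently running compute work on the GPU.
    # If H2O's backend is really using CUDA, its process should show up here
    # (with non-trivial memory usage) while a model is training.
    out = subprocess.check_output([
        'nvidia-smi',
        '--query-compute-apps=pid,process_name,used_memory',
        '--format=csv',
    ])
    print(out.decode())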

Bfloat16 training on GPUs

Submitted by 独自空忆成欢 on 2019-12-11 05:18:37

Question: Hi, I am trying to train a model using the new bfloat16 datatype for variables. I know this is supported on Google TPUs. I was wondering whether anyone has tried training on GPUs (for example, a GTX 1080 Ti). Is that even possible, and do the GPU tensor cores support it? If anyone has any experience, please share your thoughts. Many thanks!

Answer 1: I had posted this question in the TensorFlow GitHub community. Here is their response so far: "bfloat16 support isn't complete for GPUs, as it's not …
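Independent of kernel coverage, bfloat16 already exists as a TensorFlow dtype, so storage can be kept in bfloat16 while the arithmetic stays in float32. A minimal sketch in TensorFlow 1.x terms; whether a given op such as matmul has a native bfloat16 GPU kernel is exactly the open question above, so the compute here is deliberately cast back to float32:

    import tensorflow as tf

    x = tf.random_normal([4, 4], dtype=tf.float32)

    # Store/communicate in bfloat16 ...
    x_bf16 = tf.cast(x, tf.bfloat16)

    # ... but cast back to float32 for the actual math, since GPU kernel
    # coverage for bfloat16 ops is incomplete.
    y = tf.matmul(tf.cast(x_bf16, tf.float32), x)

    with tf.Session() as sess:
        print(sess.run(y))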

Error loading library gpuarray with Theano

Submitted by 不打扰是莪最后的温柔 on 2019-12-11 05:18:18

Question: I am trying to run this script to test Theano's use of my GPU and get the following error:

    ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
    Traceback (most recent call last):
      File "/home/me/anaconda3/envs/py35/lib/python3.5/site-packages/theano/gpuarray/__init__.py", line 164, in <module>
        use(config.device)
      File "/home/me/anaconda3/envs/py35/lib/python3.5/site-packages/theano/gpuarray/__init__.py", line 151, in use
        init_dev(device)
      File "/home/me/anaconda3/envs/py35/lib …
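Before looking at Theano itself, it can help to confirm that pygpu/libgpuarray can create a CUDA context at all, since "Could not initialize pygpu" usually points at the underlying libgpuarray installation (or its CUDA setup) rather than at Theano. A minimal sketch, assuming pygpu/libgpuarray are installed in the same conda environment that Theano uses:

    # Run inside the same environment as Theano.
    try:
        import pygpu
        ctx = pygpu.init('cuda')   # same device string that Theano's device=cuda maps to
        print('libgpuarray initialised a CUDA context:', ctx)
    except Exception as exc:
        print('pygpu / libgpuarray problem:', exc)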

Coding a CUDA Kernel that has many threads writing to the same index?

Submitted by 大憨熊 on 2019-12-11 05:17:06

Question: I'm writing some code for activating neural networks on CUDA, and I'm running into an issue: I'm not getting the correct summation of the weights going into a given neuron. Here is the kernel code, and I'll try to explain it a bit more clearly with the variables.

    __global__ void kernelSumWeights(float* sumArray, float* weightArray,
                                     int2* sourceTargetArray, int cLength)
    {
        int nx = threadIdx.x + TILE_WIDTH*threadIdx.y;
        int index_in = (blockIdx.x + gridDim.x*blockIdx.y)*TILE_WIDTH*TILE_WIDTH + nx; …

No Module Named '_pywrap_tensorflow_internal' (still without a working solution)

Submitted by 不想你离开。 on 2019-12-11 05:15:49

Question: I have the same problem as in the similar question and tried the proposed solution, but it did not work. Below you can find the stack trace. I am on Windows 10 x64 with Python 3.5.2 and an NVIDIA GeForce 1050 GPU. I also checked the TensorFlow site for common errors.

    C:\Users\Steph>ipython
    Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)]
    Type 'copyright', 'credits' or 'license' for more information
    IPython 6.1.0 -- An enhanced Interactive Python. Type '?' for …
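On Windows, the '_pywrap_tensorflow_internal' import error is almost always a DLL-load failure underneath: either the MSVC 2015 runtime or the CUDA/cuDNN DLLs the GPU build links against cannot be found. A rough diagnostic sketch; the DLL names below are assumptions for a TensorFlow 1.x GPU build against CUDA 8.0 and cuDNN 6, so adjust them to whatever versions are actually installed:

    import ctypes

    # Try to load each runtime DLL the TensorFlow GPU build needs. The names are
    # assumptions (MSVC 2015 / CUDA 8.0 / cuDNN 6); change them to match your install.
    for dll in ('msvcp140.dll', 'cudart64_80.dll', 'cudnn64_6.dll'):
        try:
            ctypes.WinDLL(dll)
            print(dll, 'loads fine')
        except OSError as exc:
            print(dll, 'failed to load:', exc)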

Installing GPU support for LightGBM on Google Colab

Submitted by 大兔子大兔子 on 2019-12-11 05:09:02

Question: Has anyone had any luck installing GPU support for LightGBM on Google Colab using the notebooks there?

Answer 1: Most of it was following the documentation provided here, with two small tweaks to make it work on Google Colab. Since the instances are renewed after 12 hours of usage, I put this at the beginning of my notebook to reinstall GPU support for LightGBM:

    !apt-get -qq install --no-install-recommends nvidia-375
    !apt-get -qq install --no-install-recommends nvidia-opencl-icd-375 nvidia …
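Once the reinstall has run, a quick smoke test from Python confirms that the GPU (OpenCL) build actually took: training with device='gpu' raises a LightGBM error if the package was built without GPU support. A minimal sketch; the data and parameters are arbitrary:

    import numpy as np
    import lightgbm as lgb

    # Tiny random dataset: the point is only to exercise the GPU tree learner.
    X = np.random.rand(500, 10)
    y = np.random.randint(0, 2, size=500)

    params = {'objective': 'binary', 'device': 'gpu', 'verbose': -1}
    lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=5)
    print('LightGBM GPU build is working')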

Is there a way to tell, via JavaScript, whether a portion of a webpage is being rendered on screen or not?

Submitted by 别说谁变了你拦得住时间么 on 2019-12-11 05:07:57

Question: Currently our logic runs within an iframe on clients' pages. We now have a request to detect whether that iframe is currently in the viewing window or scrolled off-screen. I just had the idea that this might be information the GPU already has, since it would allow it to render the page more efficiently with regard to what is actually on screen, and I wanted to know whether anyone is aware of this data being accessible via an API or library such as OpenGL. I am aware that providing …