gpu

Mesa 17.0.1 says OpenGL Core 4.5 even though my Intel HD 520 Graphics Card supports only 4.4

喜你入骨 submitted on 2019-12-12 02:18:42
Question: When I query some OpenGL info, I get the following:

Vendor: Intel Open Source Technology Center
WindowManager: Mesa DRI Intel(R) HD Graphics 520 (Skylake GT2)
OpenGL version: 4.5 (Core Profile) Mesa 17.0.1
GLSL version: 4.50

But according to the Intel Product Specification, my laptop's CPU/GPU (Intel 6200U with Intel HD 520, Ubuntu 17.04) supports only OpenGL 4.4. Can anybody say something about this? Is the OpenGL query wrong? Thanks.

Answer 1: If you are using the open source driver on…
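For context, the version strings above can be read with a few GL calls once a context is current. A minimal sketch in Python, assuming PyOpenGL and GLUT are installed (the original post does not show its query code, so this is purely illustrative):

```python
# Query the same strings as in the quoted output; requires a current GL context.
from OpenGL.GL import (glGetString, GL_VENDOR, GL_RENDERER,
                       GL_VERSION, GL_SHADING_LANGUAGE_VERSION)
from OpenGL.GLUT import glutInit, glutCreateWindow

glutInit()
glutCreateWindow(b"version probe")  # default (compatibility) context; a core
                                    # profile must be requested explicitly to
                                    # see "4.5 (Core Profile)" on Mesa

for label, enum in [("Vendor", GL_VENDOR),
                    ("Renderer", GL_RENDERER),
                    ("OpenGL version", GL_VERSION),
                    ("GLSL version", GL_SHADING_LANGUAGE_VERSION)]:
    print("%s: %s" % (label, glGetString(enum).decode()))
```

Note that Mesa's advertised version depends on the context type: a compatibility context may report a lower version than a core-profile context on the same driver.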

TensorFlow GPU installation error on Windows 10 with Anaconda

吃可爱长大的小学妹 submitted on 2019-12-12 01:56:45
Question: I am having a tough time trying to set up TensorFlow for GPU use. I am on Windows 10, have already downloaded the CUDA® Toolkit 8.0 and cuDNN v5.1, and uninstalled and reinstalled the Visual C++ 2015 redistributable as suggested by "On Windows, running 'import tensorflow' generates No module named '_pywrap_tensorflow' error", but this had no effect. I am also not really sure about the PATH, or whether everything is included correctly there. Here is the error I keep getting (sorry it is not properly…
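Before chasing PATH entries further, it can help to confirm what TensorFlow itself sees once it imports. A minimal sketch, assuming the TF 1.x-era GPU build that CUDA 8.0 / cuDNN 5.1 targets:

```python
# List the devices TensorFlow detects; a working GPU setup shows a /gpu:0 entry.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    print(dev.device_type, dev.name)

# Optionally log where ops are placed, to prove the GPU is actually used.
import tensorflow as tf
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(tf.constant(42)))
```

If importing tensorflow itself fails with the _pywrap_tensorflow error, the problem is still the CUDA/cuDNN DLLs not being found on PATH, not the code above.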

Running the NVENC SDK sample fails because libnvidia-encode is missing

ぐ巨炮叔叔 submitted on 2019-12-11 21:26:56
Question: When I try to make the nvEncodeApp NVENC SDK sample on CentOS 6.4, I get this error:

/usr/bin/ld: cannot find -lnvidia-encode

In the Makefile the library path is given as -L/usr/lib64 -lnvidia-encode -ldl, but there is no libnvidia-encode in /usr/lib64. How does this library get onto that path, and what is this library? Using nvidia-smi should tell you that: nvidia-smi Tue Jul 16 20:19:20 2013 +------------------------------------------------------+ | NVIDIA-SMI 4.304…
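libnvidia-encode is installed by the NVIDIA display driver itself, not by the NVENC SDK, so if it is missing from /usr/lib64 the driver is either absent or too old for NVENC. A quick sketch to probe whether the dynamic loader can find it (library name taken from the linker error above):

```python
# Probe for the NVENC driver library (normally /usr/lib64/libnvidia-encode.so.1).
import ctypes

try:
    ctypes.CDLL("libnvidia-encode.so.1")
    print("libnvidia-encode is present - NVENC driver component installed")
except OSError as exc:
    print("not found:", exc)
    # Fix: install a driver version that supports NVENC; the SDK build only
    # needs a libnvidia-encode.so symlink at link time (-lnvidia-encode).
```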

clinfo device cpu-gpu info [closed]

女生的网名这么多〃 submitted on 2019-12-11 17:55:01
Question: Can anyone tell me why the max work items for my GPU are lower than for my CPU, and likewise the compute units? Does that mean the CPU's performance is better than the GPU's?

CPU: Intel Core i7, 2.2 GHz
GPU: AMD Radeon HD 6700M

Number of platforms: 2
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 1.2 AMD-APP (1084.2)
Platform Name: AMD…
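Max work-item sizes and compute-unit counts are capacity limits, not speed ratings, so a CPU reporting larger values than a GPU says nothing about relative performance; a GPU compute unit runs many work-items in parallel. A sketch of how these limits can be read per device with PyOpenCL (assuming the pyopencl package is available):

```python
# Print work-group capacity limits for every OpenCL device; these are
# scheduling limits, not performance figures.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(dev.name.strip())
        print("  max compute units  :", dev.max_compute_units)
        print("  max work-group size:", dev.max_work_group_size)
        print("  max work-item sizes:", dev.max_work_item_sizes)
```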

Increasing CPU rendering load in game code

懵懂的女人 submitted on 2019-12-11 15:59:09
Question: This is a quasi-coding question, yes, but specifically I'm using XNA, if that helps... I have a game. The AI and game-rules logic is so efficient that I have a crap ton of CPU cycles available. My question is: can I make my rendering code more complex so it looks better and uses those free CPU cycles? Or is the rendering loop only ever so complex on the CPU side, regardless of rendering quality, so that any graphics-quality or frame-rate boost would have to come from the GPU? Thanks for any…

How to enable CUDA 5.0 in opencv v2.4.4 and VC10 without CMake and solve error 'missing cudart32_42_9.dll'?

醉酒当歌 submitted on 2019-12-11 13:51:59
Question: This is my first post; please accept my apologies if I am unclear or fail to completely abide by the posting rules. I have in any case searched far and wide in preparation for my own question. Working with:

Windows 7 Enterprise version 6.1.7600
Intel Xeon quad-core CPU, 3.07 GHz
NVIDIA Quadro 4000 GPU
CUDA v5.0 Toolkit for Windows, x64 build
OpenCV v2.4.4
OpenCV CUDA package belonging to OpenCV v2.4.4
Microsoft Visual Studio C++ 2010 Express ('vc10')
(!) Without CMake (!)

Steps, tutorials & checks I've…
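The DLL name in the error, cudart32_42_9.dll, is the 32-bit CUDA 4.2 runtime, which suggests the OpenCV binaries in use were built against CUDA 4.2 rather than the installed 5.0 toolkit. A small Windows-only sketch for checking which runtime DLLs the loader can resolve (the CUDA 5.0 file names below are assumptions based on the 5.0.35 release numbering):

```python
# Probe which CUDA runtime DLLs Windows can resolve via PATH (Windows only).
import ctypes

candidates = [
    "cudart32_42_9.dll",    # CUDA 4.2 runtime, 32-bit (name from the error)
    "cudart64_42_9.dll",    # CUDA 4.2 runtime, 64-bit
    "cudart32_50_35.dll",   # CUDA 5.0 runtime, 32-bit (assumed name)
    "cudart64_50_35.dll",   # CUDA 5.0 runtime, 64-bit (assumed name)
]
for name in candidates:
    try:
        ctypes.WinDLL(name)
        print(name, "-> found")
    except OSError:
        print(name, "-> NOT found (check PATH / the CUDA bin directory)")
```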

OpenCV gpu::dft distorted image after inverse transform

只愿长相守 submitted on 2019-12-11 13:11:46
Question: I'm working on a GPU implementation of frequency-domain filtering of an image. My code works great on the CPU (I used something like this), but I have spent a whole day trying to make the same thing work on the GPU, without success. I want to apply a filter in the frequency domain, hence I need the full (complex) result of the forward transform. I have read that I need to pass two complex matrices (src and dst) to the forward dft to obtain the full spectrum (32FC2). However, I fail to obtain the same image after the inverse…
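As a CPU-side reference for what the GPU path should reproduce, here is a short NumPy sketch of the same pipeline: full complex forward transform, a frequency-domain filter, inverse transform. It illustrates the math only, not the cv::gpu::dft API:

```python
# Round-trip test: forward FFT -> filter -> inverse FFT should recover the image.
import numpy as np

img = np.random.rand(256, 256).astype(np.float32)  # stand-in for the input image

spectrum = np.fft.fft2(img)            # full complex spectrum (analogue of 32FC2)
filt = np.ones_like(spectrum)          # identity filter for the round-trip check
restored = np.fft.ifft2(spectrum * filt).real

print("max abs error:", np.abs(restored - img).max())  # effectively zero
```

One classic cause of a distorted inverse in OpenCV is normalisation: np.fft.ifft2 divides by the element count automatically, while OpenCV's inverse DFT only does so when the DFT_SCALE flag is passed.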

Renderscript rs.finish(), allocation.syncAll(), copyTo() : wait till kernel execution finishes

元气小坏坏 submitted on 2019-12-11 12:12:23
Question: I am writing Android RenderScript code which requires back-to-back kernel calls (sometimes the output of one kernel becomes the input of another). I also have some global pointers, bound to memory from the Java layer. Each kernel updates those global pointers and outputs something. I have to make sure that the execution of kernel1 has finished before kernel2 starts. I looked at the Android RenderScript docs but couldn't understand syncAll(Usage) and finish() well. Can anyone clarify how to achieve this…

PyOpenCL returns errors the first run, then only 'invalid program' errors; examples also not working

家住魔仙堡 submitted on 2019-12-11 11:57:23
Question: I am trying to run an OpenCL kernel on the GPU using the PyOpenCL bindings. I was trying to load the kernel into my program. I ran my program once and got an error. I ran it again without changing the code and got a different, 'invalid program' error. This keeps happening with my own programs using PyOpenCL and also with the example programs. I am able to use OpenCL through the C++ bindings, on both the CPU and GPU, with no problems. So I think this is a problem specific to the PyOpenCL…
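When PyOpenCL program builds misbehave, it usually pays to pick the device explicitly (rather than via create_some_context) and to dump the compiler log on failure. A minimal sketch with a placeholder kernel:

```python
# Build a trivial kernel on an explicit device; print the build log on failure.
import pyopencl as cl

SRC = """
__kernel void scale(__global float *a, const float k) {
    a[get_global_id(0)] *= k;
}
"""

device = cl.get_platforms()[0].get_devices(cl.device_type.GPU)[0]
ctx = cl.Context([device])

prg = cl.Program(ctx, SRC)
try:
    prg.build()
except cl.RuntimeError:
    # The log typically names the real cause behind a later 'invalid program'.
    print(prg.get_build_info(device, cl.program_build_info.LOG))
    raise
```

A stale kernel cache can also make successive runs fail differently; recent pyopencl versions let you disable their source cache by setting the PYOPENCL_NO_CACHE environment variable.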

OpenCL efficient way to group a lower triangular matrix

冷暖自知 submitted on 2019-12-11 11:43:51
Question: I'm sure someone has come across this problem before. Basically I have a 2D optimisation grid of size NxM, with the constraint that n_i <= m_i, i.e. I only want to calculate the pairs in the lower triangular section of the matrix. At the moment I naively implement all NxM combinations as N work-groups of M work-items (and then use localGroupID and workGroupID to identify the pair), returning -inf when the constraint fails, to save computation. But is there a better way to set up…
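One way to avoid launching all N×M work-items and discarding half is to enqueue exactly N(N+1)/2 items over a 1D range and map each linear index back to its (row, col) pair. A sketch of the index math in Python, assuming the triangle is over a square N×N region as the constraint suggests (the same closed form works inside a kernel):

```python
# Map linear index k in [0, N*(N+1)/2) to (row, col) with col <= row.
import math

def tri_unrank(k):
    row = int((math.sqrt(8 * k + 1) - 1) / 2)  # largest row: row*(row+1)/2 <= k
    col = k - row * (row + 1) // 2
    return row, col

N = 4
print([tri_unrank(k) for k in range(N * (N + 1) // 2)])
# -> [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2), (3, 0), ...]
```

For very large N the float sqrt needs a rounding guard, but for typical grid sizes the closed form is exact, and every constrained pair is visited exactly once.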