ati

Ubuntu 12.04 /usr/bin/ld: error: cannot find -lGL

99封情书 submitted on 2019-12-24 19:23:22
Question: I installed the fglrx ATI/AMD proprietary driver, and now when I try to launch my OpenGL/SDL project I receive this message: /usr/bin/ld: error: cannot find -lGL. I run Ubuntu 12.04 desktop, 64-bit, with an HD6870 [ATI Radeon HD 6800 Series]. Answer 1: Installing fglrx-glx should help you; that package provides the libGL libraries for the proprietary AMD/ATI drivers. Source: https://stackoverflow.com/questions/11867913/ubuntu-12-04-usr-bin-ld-error-cannot-find-lgl

JOGL - monitor GPU memory

可紊 submitted on 2019-12-24 12:13:08
Question: I am looking for JOGL classes/methods/examples to retrieve the total GPU memory size and the currently available GPU memory. I know it can be done using OpenGL (JOGL Java docs). Answer 1: The link you posted uses NVIDIA proprietary extensions. However, given the way modern GPUs operate, knowing how much "memory" is left is of little practical use. Why? Because OpenGL has always operated on an abstract memory model. Single data objects (textures, VBOs) may be too large to fit into the

ATI ADL - AdapterInfo_Get

杀马特。学长 韩版系。学妹 submitted on 2019-12-13 04:44:26
Question: Good evening guys! I'm currently working with a Delphi Pascal translation of the ATI ADL structure. In brief, this allows me to retrieve information from an ATI/AMD GPU in a system, and potentially control various aspects of it (such as clock and fan speeds). The translation was taken from Delphi-Praxis (Google Translated) or Delphi-Praxis (not translated), and the provided example application works. I successfully transferred it over to a visual/GUI application, but I'm having trouble

ATI OpenCL SDK on OSX

拥有回忆 submitted on 2019-12-11 03:28:24
Question: I own a new MacBook Pro with an ATI graphics card. I'm curious whether I can download the SDK, especially the example collection and profiler, for OSX, or whether I have to run Windows/Linux natively, because I have only found versions for Windows and Linux. Thanks in advance. Answer 1: As long as you have Mac OS X 10.6 or above (which you do if you have a new MacBook Pro), you already have OpenCL installed, under something like /Developer/GPU Computing/OpenCL. Source: https://stackoverflow.com/questions/5794627/ati-opencl-sdk-on

OpenGL render difference between nVidia and ATI

心已入冬 submitted on 2019-12-10 14:59:42
Question: Recently I updated my ATI drivers (I am using an HD7970) to the newest version, and some of my OpenGL project's objects stopped working. What is more, they work on the newest nVidia drivers (tested on a 960m). Is there any difference between the ATI and nVidia rendering pipelines that I should know about? Additional info: no error from glGetError(); shaders compiled and linked properly; other render objects work fine, but their VBO population and drawing commands are different. The working ones are loaded from an *.obj file and drawn by

PyOpenCL Matrix multiplication

∥☆過路亽.° submitted on 2019-12-06 11:37:45
Question: I have this code for matrix multiplication using PyOpenCL. My problem is that the result is wrong for some matrices, and I don't understand why. After some research I think it's related to the global size or something like that, but I don't understand how to set those values. For example, matrices using numpy dtype = float32, matrix 1: [[ 0.99114645 0.09327769 0.90075564 0.8913309 ] [ 0.59739089 0.13906649 0.94246316 0.65673178] [ 0.24535166 0.68942326 0.41361505 0.5789603 ] [ 0.31962237 0.17714553 0

ASCII Art

喜夏-厌秋 submitted on 2019-12-05 08:58:20
ASCII Art, another "guess the problem statement from the samples" problem. Read the statement carefully: the background color must be reset after every output line. For some reason this problem kept scoring zero when using cout.

#include <iostream>
#include <string>
#include <cstdio>
using namespace std;

int A[2004][2003][3];
int n, m, p, q;
string color;

// Parse a hex digit ('0'-'9', 'A'-'F') into its value.
int ati(char c) {
    if (c >= 'A' && c <= 'Z') {
        return c - 'A' + 10;
    } else {
        return c - '0';
    }
}

const char* reset = "\\x1B\\x5B\\x30\\x6D";  // escaped form of ESC [ 0 m
const char* ESC = "\\x1B";

// Print x as two uppercase hex digits.
void ito(int x) {
    //printf("\\x%02X", x);
    //cout << "\\x";
    char s[3];
    s[2] = '\0';
    int a = x % 16;
    x /= 16;
    int b = x % 16;
    if (a >= 10) { s[1] = char(a - 10 + 'A'); } else s[1] = char(a + '0');
    if (b >= 10) { s[0] = char(b - 10 + 'A'); } else s[0] = char(b + '0');
    printf("

What is the actual number of vertex uniform components for GLSL shader on ATI graphics card?

▼魔方 西西 submitted on 2019-12-03 06:06:41
I'm writing a GLSL vertex shader for an iMac with an AMD Radeon HD 6970M 2048 MB graphics card: GL_MAX_VERTEX_ATTRIBS: 16, GL_MAX_VERTEX_UNIFORM_COMPONENTS: 4096, GL_VERSION: 2.1 ATI-7.12.9, GL_SHADING_LANGUAGE_VERSION: 1.20. In my shader I would like to have a large array of uniform mat4s: uniform mat4 T[65]; but if I try to have 65 of these, my shader (silently) switches to the Apple Software Renderer. If I instead use 64: uniform mat4 T[64]; everything is fine. It seems to be a problem with exceeding the maximum number of uniforms, but as I wrote above I'm getting 4096 for GL_MAX_VERTEX_UNIFORM

NVIDIA vs AMD: GPGPU performance

半世苍凉 submitted on 2019-12-03 01:47:09
Question: I'd like to hear from people with experience coding for both. Myself, I only have experience with NVIDIA. NVIDIA CUDA seems to be a lot more popular than the competition. (Just counting question tags on this forum, 'cuda' outnumbers 'opencl' 3:1, 'nvidia' outnumbers 'ati' 15:1, and there's no tag for 'ati-stream' at all.) On the other hand, according to Wikipedia, ATI/AMD cards should have a lot more potential, especially per dollar. The fastest NVIDIA card on the market as of today,