nvidia

Ubuntu 10.04: a complete fix for the garbled boot screen after installing the Nvidia driver

Submitted by ≡放荡痞女 on 2019-12-06 08:03:19
Laptop: Lenovo Y450, Ubuntu 10.04.3 LTS.
1. Problems this solves: (1) garbled boot and shutdown screens and low resolution after installing the driver; (2) the boot screen appears all at once and flashes past, so the boot progress animation is missed.
2. Solution: replace vesafb with uvesafb (I don't know exactly what that means either).
3. Required packages: (1) v86d (needed by uvesafb); (2) hwinfo (to inspect the framebuffer).
4. Steps: (1) Install the driver. Method one: download the latest driver from the Nvidia website, but nouveau must be blacklisted first, and a huge Nvidia logo appears after installation; I did not use it that way. Method two: System-->Administration-->Hardware Drivers. I used the first method, though with one catch: the stock nouveau driver has to be disabled first, as follows. To install the official driver, copy it to your home folder, press Ctrl+Alt+F1 to drop to a console and log in, run sudo /etc/init.d/gdm stop, then run sudo sh ./Nv....... (the driver file name). It will report that nouveau is in use and ask whether to create a file to disable nouveau; choose Yes and keep confirming until it exits back to the console. Run sudo reboot to restart into the GDM desktop, then press Ctrl+Alt+F1 to drop to a console again, log in, run sudo /etc/init.d/gdm stop to quit the X desktop, and run sudo sh . once more

Yocto for Nvidia Jetson fails because of GCC 7 - cannot compute suffix of object files

Submitted by 大城市里の小女人 on 2019-12-06 07:50:12
I am trying to use Yocto with meta-tegra (https://github.com/madisongh/meta-tegra) to build a minimal system for the Nvidia Jetson Nano. I need to use CUDA (currently version 10 for the Nano) with OpenCV on this platform. CUDA 10 only supports GCC 7, not GCC 8. GCC 7 has been deprecated and removed from the OpenEmbedded Warrior release in favor of GCC 8.3. My error comes from trying to use GCC 7 with the Warrior release of OE:

configure: error: cannot compute suffix of object files: cannot compile

The README for meta-tegra states the following: * CUDA 10 supports up through gcc 7 only, and some

Ubuntu kworker thread consumes 100% CPU [closed]

Submitted by 风格不统一 on 2019-12-06 06:20:20
Question (closed 5 years ago as off-topic): I had a question and was unable to find the answer (easily). On my Ubuntu installation, a kworker thread was consuming 100% CPU, which caused my computer to be very slow or crash at times. Answer 1: If you run the command: grep . -r /sys/firmware/acpi/interrupts/ and check for any high value like: /sys/firmware/acpi

L2 cache in NVIDIA Fermi

Submitted by 六月ゝ 毕业季﹏ on 2019-12-06 05:56:17
Question: When looking at the names of the performance counters in the NVIDIA Fermi architecture (the file Compute_profiler.txt in the doc folder of CUDA), I noticed that for L2 cache misses there are two performance counters, l2_subp0_read_sector_misses and l2_subp1_read_sector_misses. The documentation says these are for two slices of L2. Why are there two slices of L2? Is there any relation to the Streaming Multiprocessor architecture? What would be the effect of this division on performance? Thanks
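A side note, not part of the original question: the two subpartitions together make up the device's total L2, and that total can be queried at runtime. A minimal C++ sketch using the CUDA runtime API (device 0 is an arbitrary choice):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int l2Bytes = 0;
    // Total L2 cache size for device 0, i.e. the sum over all L2 slices
    cudaDeviceGetAttribute(&l2Bytes, cudaDevAttrL2CacheSize, 0);
    printf("L2 cache size: %d bytes\n", l2Bytes);
    return 0;
}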

What can I do against 'CUDA driver version is insufficient for CUDA runtime version'?

Submitted by 好久不见. on 2019-12-06 05:54:28
When I go to /usr/local/cuda/samples/1_Utilities/deviceQuery and execute

moose@pc09 /usr/local/cuda/samples/1_Utilities/deviceQuery $ sudo make clean
rm -f deviceQuery deviceQuery.o
rm -rf ../../bin/x86_64/linux/release/deviceQuery
moose@pc09 /usr/local/cuda/samples/1_Utilities/deviceQuery $ sudo make
"/usr/local/cuda-7.0"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=sm_20 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch
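An aside, added for context rather than taken from the original post: the error in the title means the installed display driver is older than the CUDA runtime the binary was built against. A minimal C++ sketch using the CUDA runtime API's version queries to make the mismatch visible (link with -lcudart):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);   // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVersion); // CUDA version of the runtime this binary was built against
    printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    if (driverVersion < runtimeVersion)
        printf("driver is older than the runtime: update the Nvidia driver\n");
    return 0;
}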

Disable Nvidia watchdog with OpenCL on Mac OS X 10.7.4

Submitted by 我怕爱的太早我们不能终老 on 2019-12-06 04:15:58
I have an OpenCL program which runs fine for small problems, but when running larger problems it exceeds the 8-10 s time limit for running kernels on Nvidia hardware. Although I have no monitors attached to the GPU I am computing on (an Nvidia GTX 580), the kernel is always terminated once it runs for around 8-10 s. The preliminary research I did on this problem indicates that the Nvidia watchdog should only enforce the time limit if a monitor is connected to the graphics card. However, I do not have any monitors connected to the GPU the OpenCL code is running on, yet the limit is still enforced. Is it
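A side note, not from the original question: on Nvidia hardware the CUDA runtime can report whether the OS enforces a run-time limit on kernels for a given device, which is a quick way to confirm whether the watchdog really applies to the compute GPU. A minimal C++ sketch, assuming the CUDA toolkit is installed alongside the OpenCL stack and that device 0 is the GTX 580:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int timeoutEnabled = 0;
    // 1 if the watchdog limits kernel run time on device 0, 0 otherwise
    cudaDeviceGetAttribute(&timeoutEnabled, cudaDevAttrKernelExecTimeout, 0);
    printf("kernel execution timeout (watchdog) enabled: %s\n",
           timeoutEnabled ? "yes" : "no");
    return 0;
}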

Linux - run Android emulator on Nouveau driver

Submitted by 大兔子大兔子 on 2019-12-06 03:12:17
Linux (Debian Sid x64), kernel 4.14, Nvidia GPU. I am unable to run the Android emulator on the open-source Nouveau drivers. There is no error message that I can post, just a segmentation fault. When I choose software rendering it works, but it is unusable (it runs very slowly). Does anybody know a workaround for this, or am I forced to use the official Nvidia drivers? Source: https://stackoverflow.com/questions/47900233/linux-run-android-emulator-on-nouveau-driver

Why does this crash when using OpenGL core profile?

Submitted by 房东的猫 on 2019-12-06 00:04:14
When I try to run this simple OpenGL test program I get a segmentation fault. This only happens when I create the context using the core profile flag. If I use the compatibility profile flag, the program runs without issue. Edit: I checked the pointer to the function glGenVertexArrays and it returned NULL. If glfwCreateWindow doesn't return NULL, glGetString(GL_VERSION) confirms that the context is version 4.3, and glewInit returns GLEW_OK, then why is glGenVertexArrays == NULL? My OS is Windows 7 64-bit and my GPU is an Nvidia GTX 760 with the 331.82 WHQL driver. Code: #include <GL/glew.h>
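For context, an addition rather than part of the question: a common cause of this exact symptom is that older GLEW versions detect extensions via glGetString(GL_EXTENSIONS), which core profiles removed, so extension-gated entry points such as glGenVertexArrays stay NULL even though glewInit() succeeds. A C++ sketch of the usual workaround, assuming GLFW 3 and GLEW (the window size and title are placeholders):

#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    // Request a 4.3 core-profile context, as in the question
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* window = glfwCreateWindow(640, 480, "core", NULL, NULL);
    if (!window) return 1;
    glfwMakeContextCurrent(window);

    glewExperimental = GL_TRUE;          // make GLEW load core-profile entry points
    if (glewInit() != GLEW_OK) return 1;
    glGetError();                        // clear the spurious error glewInit can leave behind

    printf("glGenVertexArrays is %s\n", glGenVertexArrays ? "loaded" : "NULL");
    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}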

Selective nvidia #pragma optionNV(unroll all)

Submitted by 雨燕双飞 on 2019-12-05 23:14:35
I'm playing around with Nvidia's unroll loops directive, but haven't seen a way to turn it on selectively. Let's say I have this:

void testUnroll() {
    #pragma optionNV(unroll all)
    for (...)
        ...
}

void testNoUnroll() {
    for (...)
        ...
}

Here, I'm assuming both loops end up being unrolled. To stop this, I think the solution will involve resetting the directive after the block I want affected, for example:

#pragma optionNV(unroll all)
for (...)
    ...
#pragma optionNV(unroll default) //??

However, I don't know the keyword to reset the unroll behaviour to the initial/default setting. How can this be done

Solving the 2D diffusion (heat) equation with CUDA

Submitted by 拥有回忆 on 2019-12-05 21:28:12
I am learning CUDA by trying to solve some standard problems. As an example, I am solving the diffusion equation in two dimensions with the following code. But my results differ from the standard results and I am not able to figure out why.

//kernel definition
__global__ void diffusionSolver(double* A, double* old, int n_x, int n_y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;

    if (i*(n_x-i-1)*j*(n_y-j-1) != 0)
        A[i+n_y*j] = A[i+n_y*j] + (old[i-1+n_y*j] + old[i+1+n_y*j] +
                                   old[i+(j-1)*n_y] + old[i+(j+1)*n_y]
                                   - 4*old[i+n_y*j]) / 40;
}

int main() {
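An aside, not from the original post: two things in the kernel above look suspect. The product test i*(n_x-i-1)*j*(n_y-j-1) != 0 does not stop threads that overshoot the grid (for i >= n_x the product can still be nonzero, giving out-of-bounds accesses), and the row stride mixes n_y with the x-index i, which is only consistent when n_x == n_y. A hedged sketch of a corrected kernel, assuming row-major storage with n_x columns and the standard explicit update new = old + laplacian(old)/40; the name diffusionSolverFixed is made up for illustration:

// Sketch under the assumptions above, not the original poster's code
__global__ void diffusionSolverFixed(double* A, const double* old, int n_x, int n_y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // column (x)
    int j = blockIdx.y * blockDim.y + threadIdx.y; // row (y)

    // Guard against threads outside the grid, then skip the fixed boundary points
    if (i >= n_x || j >= n_y) return;
    if (i == 0 || i == n_x - 1 || j == 0 || j == n_y - 1) return;

    int idx = i + n_x * j; // row-major: consecutive rows are n_x elements apart
    A[idx] = old[idx] + (old[idx - 1] + old[idx + 1] +
                         old[idx - n_x] + old[idx + n_x]
                         - 4.0 * old[idx]) / 40.0;
}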