GPU

Simple char assignment not working in CUDA

Submitted by こ雲淡風輕ζ on 2019-12-11 04:56:46
Question: Please look at the code below, which does a simple char assignment:

    __global__ void seehowpointerwork(char* gpuHello, char* finalPoint) {
        char* temp;
        bool found = false;
        for (int i = 0; i < 11; i++) {
            if (gpuHello[i] == ' ') {
                temp = &gpuHello[i + 1];
                found = true;
                break;
            }
        }
        bool sth = found;
        finalPoint = temp;
    }

    int main() {
        // Testing one concept
        string hello = "Hello World";
        char* gpuHello;
        cudaMalloc((void**)&gpuHello, 11 * sizeof(char));
        cudaMemcpy(gpuHello, hello.c_str(), 11 * sizeof(char),

C# CPU and GPU Temp

Submitted by 南笙酒味 on 2019-12-11 04:05:03
Question: I'm in the process of creating a personal monitoring program for system performance, and I'm having trouble figuring out how C# retrieves CPU and GPU temperature information. I already have the program retrieve the CPU load and frequency information (as well as various other things) through PerformanceCounter, but I haven't been able to find the Instance, Object, and Counter names for CPU temperature. Also, I need to be able to get the temperature of more than one GPU, as I have two. What do I do?

How to choose a designated GPU to run a CUDA program?

Submitted by 流过昼夜 on 2019-12-11 03:59:59
Question: My PC (Ubuntu 12.04 x86 with CUDA 6.0) has 2 GPUs. I have some CUDA programs, and a program written in Python to manage them. For example, I want to select one GPU to run some of the CUDA programs and the other GPU to run the rest. But the management process is outside the CUDA code, so I cannot use the cudaSetDevice API inside the CUDA programs. That is, the CUDA programs are unalterable; I can only select the GPU outside of them. Is it possible to do that? Answer 1: One option is to

GPU Memory not freeing itself after CUDA script execution

Submitted by 流过昼夜 on 2019-12-11 03:58:37
Question: I am having an issue with my graphics card retaining memory after the execution of a CUDA script (even with the use of cudaFree()). On boot the total used memory is about 128 MB, but after the script runs it runs out of memory mid-execution. nvidia-smi:

    +------------------------------------------------------+
    | NVIDIA-SMI 340.29     Driver Version: 340.29         |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC

Efficiently Generate a Heat Map Style Histogram using GLSL

Submitted by 北战南征 on 2019-12-11 03:49:17
Question: I would like to generate a heat-map-style histogram using GLSL shaders. Specifically, I have a vector of 2D values that I want to bin into a 2D grid, where each cell is a bin for a specific range of (x, y) values, and its color is determined by how many values are binned into it. I can easily assign values to cells in the vertex shader or a compute shader. How can I also write the frequency of each bin/cell to a 2D buffer, and then assign colors and render to a texture accordingly? Source: https:/

GPU YUV to RGB. Worth the effort?

Submitted by 江枫思渺然 on 2019-12-11 03:47:47
Question: I have to convert several full PAL videos (720x576@25) from YUV 4:2:2 to RGB in real time, and probably apply a custom resize to each. I have thought of using the GPU, as I have seen an example that does just this (except that it is 4:4:4, so the bpp is the same in source and destination): http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c However, I don't have any experience with using GPUs and I'm not sure what can be done. The example, as I understand it, just converts the video frame to

GPU gives no performance improvement in Julia set computation

Submitted by 北战南征 on 2019-12-11 03:35:35
Question: I am trying to compare performance on the CPU and the GPU. I have: CPU: Intel Core i5 M 480 @ 2.67 GHz x 4; GPU: NVIDIA GeForce GT 420M. I can confirm that the GPU is configured and works correctly with CUDA. I am implementing the Julia set computation: http://en.wikipedia.org/wiki/Julia_set Basically, for every pixel, if the coordinate is in the set it is painted red, else it is painted white. Although I get identical answers from both the CPU and the GPU, instead of a performance improvement I get a

Android - capture exactly what the screen displays (video/stream) - save it as image on device

Submitted by 做~自己de王妃 on 2019-12-11 03:19:21
Question: I am working on an application that rates the content of a view. I use a WebView connected to a server where either a live stream from a camera or a YouTube video is being played. In another area, the user is supposed to touch and rate what he sees. On touch-down, I would like to create an image of the current state of the WebView that is connected to the server. getDrawingCache() does not do the job, as it returns only a black rectangle for the content. I think I need something which

OpenCL multiple command queues for concurrent NDRange kernel launch

Submitted by 筅森魡賤 on 2019-12-11 03:13:12
Question: I'm trying to run a vector-addition application in which I need to launch multiple kernels concurrently. For concurrent kernel launch, someone answering my last question advised me to use multiple command queues, which I am defining with an array:

    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &err);
    for (i = 0; i < num_ker; ++i) {
        queue[i] = clCreateCommandQueue(context, device_id, 0, &err);
    }

I am getting the error "command terminated by signal 11" somewhere around the above code. I'm using

Limitations of work-item load in GPU? CUDA/OpenCL

Submitted by 只谈情不闲聊 on 2019-12-11 02:57:08
Question: I have a compute-intensive image algorithm that, for each pixel, needs to read many distant pixels. The distance depends on a constant defined at compile time. My OpenCL algorithm performs well, but at a certain maximum distance (resulting in heavier for loops) the driver seems to bail out. The screen goes black for a couple of seconds, and then the command queue never finishes. A balloon message reveals that the driver is unhappy: "Display driver AMD driver stopped responding and