Renderscript and the GPU

Posted by 試著忘記壹切 on 2020-01-11 08:47:06

Question


I know that Renderscript is designed to hide which processor the code runs on, but is there any way to write a script so that, on GPU-compute-capable devices (at the moment, the Nexus 10), it will run on the GPU? And is there any way to tell whether a script's function is running on the GPU?

www.leapconf.com/downloads/LihuaZhang-MulticoreWare.pdf suggests that if I don't use globals, don't use recursion, and don't call rsDebug anywhere in a kernel, it will run on the GPU; is that correct?

I'd love to see a short script that someone has verified will run on the GPU as a purely compute-based task (e.g., no graphics work).
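For concreteness, here is the sort of trivial compute-only kernel I have in mind, written to follow the restrictions above (no globals, no recursion, no rsDebug). Just a sketch; the package name is a placeholder:

```c
#pragma version(1)
#pragma rs java_package_name(com.example.rstest)  // placeholder package

// Invoked once per element via forEach_root(); no globals,
// no recursion, and no rsDebug() calls anywhere.
void root(const uchar4 *in, uchar4 *out, uint32_t x, uint32_t y) {
    float4 f = rsUnpackColor8888(*in);
    f.rgb = clamp(f.rgb * 1.2f, 0.f, 1.f);  // simple brightness scale
    *out = rsPackColorTo8888(f);
}
```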


Answer 1:


In general, those claims about the Nexus 10's behavior are correct. There are also some restrictions on calling certain RS runtime functions (for example, calling rsGetAllocation) that would force a function onto the CPU. However, I think globals that aren't allocations or bound pointers are okay in 4.2.
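As a rough illustration (a sketch, not code from the platform sources; the names are made up), a script like this should still be GPU-eligible in 4.2, since gGain is a plain scalar global rather than an allocation or bound pointer, and the kernel avoids calls like rsGetAllocation and rsDebug:

```c
#pragma version(1)
#pragma rs java_package_name(com.example.rstest)  // made-up package

// A plain scalar global (not an rs_allocation, not a bound pointer);
// Java code sets it through the generated set_gGain() method.
float gGain = 1.f;

// No rsGetAllocation(), rsDebug(), or recursion in the kernel body.
void root(const float *in, float *out, uint32_t x, uint32_t y) {
    *out = *in * gGain;
}
```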

Going forward, a lot of those restrictions are going to be relaxed (globals being the big one).

In terms of seeing where a kernel runs: there's not much you can do with 4.2 to figure that out. We haven't seen a compelling reason to do so yet, but if this turns out to be really important it's something we could add without too much difficulty via something like systrace. Feel free to complain to us if you can demonstrate why that's hurting you beyond "I assume the code will run faster on the GPU."

I'm pretty sure that the Mandelbrot implementation in ImageProcessing (fw/base/tests/RenderScriptTests/ImageProcessing/) runs on the GPU in 4.2.
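For reference, a kernel in that spirit looks roughly like this (a sketch from memory, not the actual ImageProcessing source; the constants and names are illustrative):

```c
#pragma version(1)
#pragma rs java_package_name(com.example.mandelbrot)  // illustrative package

// Scalar globals, set from Java via the generated set_* methods.
uint32_t gMaxIteration = 64;
uint32_t gDimX = 512;
uint32_t gDimY = 512;

// Output-only kernel: forEach_root() is called with just an output
// allocation; each cell gets a grayscale escape-time value.
void root(uchar4 *v_out, uint32_t x, uint32_t y) {
    // Map the pixel into the complex plane.
    float cx = -2.5f + 3.5f * ((float)x / gDimX);
    float cy = -1.f + 2.f * ((float)y / gDimY);

    float zx = 0.f, zy = 0.f;
    uint32_t iter = 0;
    while ((zx * zx + zy * zy < 4.f) && (iter < gMaxIteration)) {
        float xt = zx * zx - zy * zy + cx;
        zy = 2.f * zx * zy + cy;
        zx = xt;
        iter++;
    }

    uchar c = (uchar)((255 * iter) / gMaxIteration);
    *v_out = (uchar4){c, c, c, 0xff};
}
```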



Source: https://stackoverflow.com/questions/17055539/renderscript-and-the-gpu
