Generating random numbers: CPU vs GPU, which currently wins?


Question


I've been working on a physics simulation that requires generating a large number of random numbers (at least 10^13, to give an idea of the scale). I've been using the C++11 implementation of the Mersenne Twister. I've also read that a GPU implementation of this same algorithm is now part of the CUDA libraries and that GPUs can be extremely efficient at this task, but I couldn't find explicit numbers or a benchmark comparison. For example, compared to an 8-core i7, are recent-generation Nvidia cards more performant at generating random numbers? If so, by how much, and in which price range?

I'm thinking that my simulation could gain from having the GPU generate a huge pile of random numbers while the CPU does the rest.
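For reference, here is a minimal sketch of the CPU baseline described in the question, timing how fast std::mt19937 (the C++11 Mersenne Twister) produces 32-bit values; the draw count and seed are arbitrary choices for illustration:

    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <random>

    int main() {
        const std::uint64_t N = 100000000ULL;  // 1e8 draws for the timing run
        std::mt19937 gen(12345);               // fixed seed, for reproducibility

        auto t0 = std::chrono::steady_clock::now();
        std::uint32_t sink = 0;
        for (std::uint64_t i = 0; i < N; ++i)
            sink ^= gen();                     // consume each value so the loop isn't optimized away
        auto t1 = std::chrono::steady_clock::now();

        std::chrono::duration<double> dt = t1 - t0;
        std::cout << "generated " << N << " numbers in " << dt.count() << " s ("
                  << N / dt.count() / 1e6 << " M/s), checksum " << sink << "\n";
    }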


Answer 1:


Some comparisons can be found here: https://developer.nvidia.com/cuRAND
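The cuRAND host API linked above can fill a device buffer in bulk without writing any kernel code. A minimal sketch, assuming a CUDA toolkit installation (compile with nvcc and link with -lcurand); the generator choice and buffer size here are illustrative:

    #include <cstdio>
    #include <cuda_runtime.h>
    #include <curand.h>

    int main() {
        const size_t n = 1 << 24;              // ~16.7 million samples per batch
        float* d_buf = nullptr;
        cudaMalloc(&d_buf, n * sizeof(float));

        curandGenerator_t gen;
        // MTGP32 is a GPU-oriented Mersenne Twister variant; cuRAND also
        // offers XORWOW, Philox, and MRG32k3a generators.
        curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_MTGP32);
        curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);

        curandGenerateUniform(gen, d_buf, n);  // fill the buffer on the GPU
        cudaDeviceSynchronize();               // generation runs asynchronously

        // ...consume d_buf in your own kernels, or cudaMemcpy it back...

        curandDestroyGenerator(gen);
        cudaFree(d_buf);
        std::printf("generated %zu uniform floats on the GPU\n", n);
        return 0;
    }

Keeping the numbers on the device and consuming them in your own kernels avoids the PCIe transfer, which is usually the point of generating them on the GPU in the first place.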




Answer 2:


If you have a new enough Intel CPU (Ivy Bridge or newer), you can use the RDRAND instruction.

It can be used via the _rdrand16_step(), _rdrand32_step(), and _rdrand64_step() intrinsic functions, declared in <immintrin.h>.

Available via VS2012/13, the Intel compiler, and gcc.

The generator is seeded from an on-chip hardware entropy source and is designed for NIST SP 800-90A compliance, so the quality of its output is very high.
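A minimal usage sketch: each _rdrandNN_step() intrinsic returns 1 on success and 0 if the hardware source is momentarily drained, so the usual idiom is a retry loop (compile with -mrdrnd on gcc/clang; MSVC needs no special flag):

    #include <cstdio>
    #include <immintrin.h>

    // Retry until the instruction reports success; failures are rare and transient.
    static unsigned long long rdrand64() {
        unsigned long long v;
        while (!_rdrand64_step(&v)) { }
        return v;
    }

    int main() {
        for (int i = 0; i < 4; ++i)
            std::printf("%016llx\n", rdrand64());
        return 0;
    }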

Some numbers for reference:

On an Ivy Bridge dual-core laptop with Hyper-Threading (2.3 GHz), generating 2^32 (about 4.3 billion) random 32-bit numbers took 5.7 seconds single-threaded and 1.7 seconds with OpenMP.
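A sketch of the kind of measurement quoted above: filling a buffer of 32-bit values with RDRAND across threads via OpenMP. The buffer here is 2^28 values rather than 2^32, purely to keep the example modest; build with e.g. g++ -O2 -mrdrnd -fopenmp:

    #include <cstdio>
    #include <cstdint>
    #include <immintrin.h>
    #include <vector>

    int main() {
        const std::size_t n = std::size_t(1) << 28;  // 2^28 values, ~1 GiB
        std::vector<std::uint32_t> buf(n);

        // Every iteration is independent, and RDRAND keeps no per-thread state
        // (unlike std::mt19937), so the loop parallelizes trivially.
        #pragma omp parallel for
        for (long long i = 0; i < (long long)n; ++i) {
            unsigned int v;
            while (!_rdrand32_step(&v)) { }          // retry on transient failure
            buf[(std::size_t)i] = v;
        }

        std::printf("filled %zu values, first = %08x\n", n, (unsigned)buf[0]);
        return 0;
    }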



Source: https://stackoverflow.com/questions/20970643/generating-random-numbers-cpu-vs-gpu-which-currently-wins
