Using random numbers with GPUs

The GSL manual recommends the Mersenne Twister.
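
For comparison on the CPU side, GSL's default generator is in fact this Mersenne Twister (mt19937); a minimal host-side sketch (link with -lgsl -lgslcblas):

```cuda
// Plain host-side C; GSL's default generator is the recommended
// Mersenne Twister (mt19937).
#include <stdio.h>
#include <gsl/gsl_rng.h>

int main(void) {
    gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
    gsl_rng_set(r, 42);                     // seed the generator
    for (int i = 0; i < 4; ++i)
        printf("%f\n", gsl_rng_uniform(r)); // uniform double in [0,1)
    gsl_rng_free(r);
    return 0;
}
```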

The Mersenne Twister authors have a version for Nvidia GPUs. I looked into porting this to the R package gputools, but found that I needed an excessively large number of draws (millions, I think) before the combination of "generate on the GPU and make available to R" was faster than just drawing in R (using only the CPU).

It really is a computation / communication tradeoff.

Massively parallel random number generation, as GPUs require it, is a difficult problem and an active research topic. You not only have to be careful to use a good sequential random generator (these you find in the literature) but also need something that guarantees the parallel streams are independent. Pairwise independence is not sufficient for a good Monte Carlo simulation. AFAIK there is no good public domain code available.

My colleagues and I have a preprint, to appear at the SC11 conference, that revisits an alternative technique for generating random numbers that is well suited to GPUs. The idea is that the nth random number is:

x_n = f(n) 

In contrast to the conventional approach where

x_n = f(x_{n-1})

Source code is available that implements several different generators, offering 2^64 or more streams, each with a period of 2^128 or more. All pass a wide assortment of tests of both intra-stream and inter-stream statistical independence (the TestU01 Crush and BigCrush suites). The library also includes adapters that let you use our generators in a GSL framework.
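
To make the contrast concrete, here is a toy CUDA sketch of the counter-based idea. The SplitMix64 finalizer stands in for f purely for illustration; it is not one of the paper's generators, which are carefully designed keyed bijections:

```cuda
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

// Toy counter-based generator, x_n = f(n). The mixer is the SplitMix64
// finalizer, used here only to illustrate the idea.
__host__ __device__ uint64_t mix(uint64_t n, uint64_t key) {
    uint64_t z = n + key * 0x9E3779B97F4A7C15ULL;
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

// No state is carried between draws: each thread turns its own index
// straight into a uniform number, which is why the scheme maps so well
// onto GPUs.
__global__ void draw(float *out, uint64_t key, unsigned int n) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = (mix(i, key) >> 40) * (1.0f / 16777216.0f); // top 24 bits -> [0,1)
}

int main(void) {
    const unsigned int n = 1024;
    float *dev, host[4];
    cudaMalloc(&dev, n * sizeof(float));
    draw<<<(n + 255) / 256, 256>>>(dev, 0xDEADBEEFULL, n);
    cudaMemcpy(host, dev, sizeof(host), cudaMemcpyDeviceToHost);
    printf("%f %f %f %f\n", host[0], host[1], host[2], host[3]);
    cudaFree(dev);
    return 0;
}
```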

I've just found that NAG provides some RNG routines. These libraries are free for academics.

Use the Mersenne Twister PRNG, as provided in the CUDA SDK.
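
If you'd rather not build from the SDK sample, the cuRAND library that ships with the CUDA toolkit exposes a GPU-oriented Mersenne Twister variant (MTGP32) through its host API; a minimal sketch (error checking omitted, link with -lcurand):

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <curand.h>

int main(void) {
    const size_t n = 1 << 20;
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));

    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_MTGP32); // GPU Mersenne Twister
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
    curandGenerateUniform(gen, dev, n);                    // n uniforms in (0,1]

    float host[4];
    cudaMemcpy(host, dev, sizeof(host), cudaMemcpyDeviceToHost);
    printf("%f %f %f %f\n", host[0], host[1], host[2], host[3]);

    curandDestroyGenerator(gen);
    cudaFree(dev);
    return 0;
}
```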

Here we use Sobol sequences on the GPUs.

You will have to implement them yourself.
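
As a starting point for rolling your own, a minimal sketch: the first Sobol dimension reduces to the base-2 radical inverse (the van der Corput sequence), which each thread can compute independently from its index; higher dimensions need proper direction-number tables (e.g. the Joe-Kuo tables):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// First dimension of a Sobol sequence: XOR the direction numbers
// v_k = 2^(31-k) for each set bit of the index i. This special case is
// the base-2 van der Corput sequence; further dimensions need
// precomputed direction-number tables.
__host__ __device__ float sobol_dim0(unsigned int i) {
    unsigned int x = 0;
    for (unsigned int v = 1u << 31; i; i >>= 1, v >>= 1)
        if (i & 1)
            x ^= v;
    return x * (1.0f / 4294967296.0f); // scale to [0,1)
}

// Each thread computes one point of the sequence independently.
__global__ void sobol_kernel(float *out, unsigned int n) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = sobol_dim0(i);
}

int main(void) {
    const unsigned int n = 8;
    float *dev, host[8];
    cudaMalloc(&dev, n * sizeof(float));
    sobol_kernel<<<1, 256>>>(dev, n);
    cudaMemcpy(host, dev, sizeof(host), cudaMemcpyDeviceToHost);
    for (unsigned int i = 0; i < n; ++i)
        printf("%f\n", host[i]); // 0, 0.5, 0.25, 0.75, 0.125, ...
    cudaFree(dev);
    return 0;
}
```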
