Sample from multivariate normal/Gaussian distribution in C++

Anonymous (unverified), submitted 2019-12-03 01:29:01

Question:

I've been hunting for a convenient way to sample from a multivariate normal distribution. Does anyone know of a readily available code snippet to do that? For matrices/vectors, I'd prefer to use Boost or Eigen or another phenomenal library I'm not familiar with, but I could use GSL in a pinch. I'd also like it if the method accepted nonnegative-definite covariance matrices rather than requiring positive-definite (e.g., as with the Cholesky decomposition). This exists in MATLAB, NumPy, and others, but I've had a hard time finding a ready-made C/C++ solution.

If I have to implement it myself, I'll grumble but that's fine. If I do that, Wikipedia makes it sound like I should

  1. generate n 0-mean, unit-variance, independent normal samples (boost will do this)
  2. find the eigen-decomposition of the covariance matrix
  3. scale each of the n samples by the square-root of the corresponding eigenvalue
  4. rotate the vector of samples by pre-multiplying the scaled vector by the matrix of orthonormal eigenvectors found by the decomposition

I would like this to work quickly. Does someone have an intuition for when it would be worthwhile to check whether the covariance matrix is positive-definite and, if so, use Cholesky instead?

Answer 1:

Since this question has garnered a lot of views, I thought I'd post code for the final answer that I found, in part, by posting to the Eigen forums. The code uses Boost for the univariate normal and Eigen for matrix handling. It feels rather unorthodox, since it involves using the "internal" namespace, but it works. I'm open to improving it if someone suggests a way.

```cpp
#include <Eigen/Dense>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/normal_distribution.hpp>
#include <iostream>

/*
  We need a functor that can pretend it's const,
  but to be a good random number generator
  it needs mutable state.
*/
namespace Eigen {
namespace internal {
template<typename Scalar>
struct scalar_normal_dist_op
{
  static boost::mt19937 rng;                        // The uniform pseudo-random algorithm
  mutable boost::normal_distribution<Scalar> norm;  // The gaussian combinator

  EIGEN_EMPTY_STRUCT_CTOR(scalar_normal_dist_op)

  template<typename Index>
  inline const Scalar operator() (Index, Index = 0) const { return norm(rng); }
};

template<typename Scalar> boost::mt19937 scalar_normal_dist_op<Scalar>::rng;

template<typename Scalar>
struct functor_traits<scalar_normal_dist_op<Scalar> >
{ enum { Cost = 50 * NumTraits<Scalar>::MulCost, PacketAccess = false, IsRepeatable = false }; };

} // end namespace internal
} // end namespace Eigen

/*
  Draw nn samples from a size-dimensional normal distribution
  with a specified mean and covariance
*/
int main()
{
  int size = 2; // Dimensionality (rows)
  int nn = 5;   // How many samples (columns) to draw
  Eigen::internal::scalar_normal_dist_op<double> randN;        // Gaussian functor
  Eigen::internal::scalar_normal_dist_op<double>::rng.seed(1); // Seed the rng

  // Define mean and covariance of the distribution
  Eigen::VectorXd mean(size);
  Eigen::MatrixXd covar(size, size);

  mean  << 0, 0;
  covar << 1, .5,
          .5, 1;

  Eigen::MatrixXd normTransform(size, size);

  Eigen::LLT<Eigen::MatrixXd> cholSolver(covar);

  // We can only use the cholesky decomposition if
  // the covariance matrix is symmetric, pos-definite.
  // But a covariance matrix might be pos-semi-definite.
  // In that case, we'll go to an EigenSolver
  if (cholSolver.info() == Eigen::Success) {
    // Use cholesky solver
    normTransform = cholSolver.matrixL();
  } else {
    // Use eigen solver
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eigenSolver(covar);
    normTransform = eigenSolver.eigenvectors()
                    * eigenSolver.eigenvalues().cwiseSqrt().asDiagonal();
  }

  Eigen::MatrixXd samples = (normTransform
                             * Eigen::MatrixXd::NullaryExpr(size, nn, randN)).colwise()
                             + mean;

  std::cout << "Mean\n" << mean << std::endl;
  std::cout << "Covar\n" << covar << std::endl;
  std::cout << "Samples\n" << samples << std::endl;
}
```


Answer 2:

Here is a class to generate multivariate normal random variables in Eigen which uses C++11 random number generation and avoids the Eigen::internal stuff by using Eigen::MatrixBase::unaryExpr():

```cpp
struct normal_random_variable
{
    normal_random_variable(Eigen::MatrixXd const& covar)
        : normal_random_variable(Eigen::VectorXd::Zero(covar.rows()), covar)
    {}

    normal_random_variable(Eigen::VectorXd const& mean, Eigen::MatrixXd const& covar)
        : mean(mean)
    {
        Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eigenSolver(covar);
        transform = eigenSolver.eigenvectors() * eigenSolver.eigenvalues().cwiseSqrt().asDiagonal();
    }

    Eigen::VectorXd mean;
    Eigen::MatrixXd transform;

    Eigen::VectorXd operator()() const
    {
        static std::mt19937 gen{ std::random_device{}() };
        static std::normal_distribution<> dist;

        return mean + transform * Eigen::VectorXd{ mean.size() }.unaryExpr([&](auto x) { return dist(gen); });
    }
};
```

It can be used as

```cpp
int size = 2;
Eigen::MatrixXd covar(size, size);
covar << 1, .5,
        .5, 1;

normal_random_variable sample { covar };

std::cout << sample() << std::endl;
std::cout << sample() << std::endl;
```


Answer 3:

What about doing an SVD and then checking whether the matrix is PD? Note that this does not require you to compute the Cholesky factorization. I think SVD is slower than Cholesky, but both are cubic in the number of flops.


