mcmc

Rewriting a pymc script for parameter estimation in dynamical systems in pymc3

坚强是说给别人听的谎言, submitted on 2019-12-03 16:43:43
I'd like to use pymc3 to estimate unknown parameters and states in a Hodgkin-Huxley neuron model. My code in pymc is based on http://healthyalgorithms.com/2010/10/19/mcmc-in-python-how-to-stick-a-statistical-model-on-a-system-dynamics-model-in-pymc/ and executes reasonably well:

    # parameter priors
    @deterministic
    def HH(priors in here):
        # model equations
        # return numpy arrays that somehow contain the probability distributions as elements
        return V, n, m, h

    # Make V deterministic in one line. Seems to be the magic that makes this work.
    V = Lambda('V', lambda HH=HH: HH[0])

    # set up the likelihood
    A =
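
In pymc3, one known way to reproduce this pattern is to wrap the ODE integration in a Theano op with as_op, so the simulator can sit inside the model graph. The sketch below is a minimal, hedged example rather than the asker's actual code: hh_simulate, g_na, g_k, the prior bounds, and observed_voltage are all hypothetical names and values. Because as_op exposes no gradient, a gradient-free step method such as Metropolis is used instead of NUTS.

    import numpy as np
    import theano.tensor as tt
    import pymc3 as pm
    from theano.compile.ops import as_op

    @as_op(itypes=[tt.dscalar, tt.dscalar], otypes=[tt.dvector])
    def hh_simulate(g_na, g_k):
        # integrate the Hodgkin-Huxley ODEs here (e.g. with scipy.integrate.odeint)
        # and return the simulated voltage trace; a placeholder stands in below
        return np.zeros(1000)

    observed_voltage = np.zeros(1000)  # stand-in for a recorded voltage trace

    with pm.Model():
        g_na = pm.Uniform('g_na', 100., 140.)  # hypothetical prior bounds
        g_k = pm.Uniform('g_k', 20., 50.)      # hypothetical prior bounds
        V_sim = hh_simulate(g_na, g_k)
        sigma = pm.HalfNormal('sigma', sd=1.)
        pm.Normal('V_obs', mu=V_sim, sd=sigma, observed=observed_voltage)
        trace = pm.sample(2000, step=pm.Metropolis())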

Difficulties on pymc3 vs. pymc2 when discrete variables are involved

試著忘記壹切, submitted on 2019-12-03 15:50:33
I'm updating some calculations from pymc2 to pymc3, and I'm having problems with sampler behavior when the model contains discrete random variables. As an example, consider the following model in pymc2:

    import pymc as pm

    N = 100
    data = 10

    p = pm.Beta('p', alpha=1.0, beta=1.0)
    q = pm.Beta('q', alpha=1.0, beta=1.0)
    A = pm.Binomial('A', N, p)
    X = pm.Binomial('x', A, q, observed=True, value=data)

It's not really representative of anything; it's just a model where one of the unobserved variables is discrete. When I sample this model with pymc2 I get the following results:

    mcmc
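
For comparison, a direct pymc3 translation of the same model might look like the sketch below; pm.sample is left to assign step methods automatically, which typically yields NUTS for the continuous p and q and a Metropolis step for the discrete A:

    import pymc3 as pm

    N = 100
    data = 10

    with pm.Model():
        p = pm.Beta('p', alpha=1.0, beta=1.0)
        q = pm.Beta('q', alpha=1.0, beta=1.0)
        A = pm.Binomial('A', n=N, p=p)
        X = pm.Binomial('X', n=A, p=q, observed=data)
        # step methods are assigned per-variable automatically
        trace = pm.sample(10000)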

MCMCglmm multinomial model in R

你离开我真会死。, submitted on 2019-12-03 08:10:41
I'm trying to build a model using the MCMCglmm package in R. The data are structured as follows, where dyad, focal, and other are random effects, predict1-2 are predictor variables, and resp1-5 are outcome variables counting observed behaviors of different subtypes:

    dyad focal other r present village resp1 resp2 resp3 resp4 resp5
    1 10101 14302 0.5 3 1 0 0 4 0 5
    2 10405 11301 0.0 5 0 0 0 1 0 1
    …

So a model with only one outcome (teaching) is as follows:

    prior_overdisp_i <- list(
      R = list(V = diag(2), nu = 0.08, fix = 2),
      G = list(G1 = list(V = 1, nu = 0.08),
               G2 = list(V = 1, nu = 0.08),
               G3 = list(V = 1, nu = 0.08

Modified BPMF in PyMC3 using `LKJCorr` priors: PositiveDefiniteError using `NUTS`

给你一囗甜甜゛, submitted on 2019-12-01 03:06:57
Question: I previously implemented the original Bayesian Probabilistic Matrix Factorization (BPMF) model in pymc3. See my previous question for reference, data source, and problem setup. Per the answer to that question from @twiecki, I've implemented a variation of the model using LKJCorr priors for the correlation matrices and uniform priors for the standard deviations. In the original model, the covariance matrices are drawn from Wishart distributions, but due to current limitations of pymc3, the
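
As a rough sketch of that variation (hedged: the LKJCorr argument names have changed across pymc3 releases, and dim is a hypothetical latent dimension), the flattened correlation vector returned by LKJCorr can be unpacked into a full matrix with Theano ops:

    import numpy as np
    import pymc3 as pm
    import theano.tensor as tt

    dim = 10  # hypothetical latent dimension

    with pm.Model():
        # LKJ prior over the flattened upper triangle of a dim x dim correlation matrix
        corr_triu = pm.LKJCorr('corr_triu', eta=1., n=dim)
        sigma = pm.Uniform('sigma', 0., 5., shape=dim)

        # unpack the triangle into a full correlation matrix
        C = tt.zeros((dim, dim))
        C = tt.set_subtensor(C[np.triu_indices(dim, k=1)], corr_triu)
        C = C + C.T + tt.eye(dim)

        # covariance = diag(sigma) * C * diag(sigma)
        cov = tt.diag(sigma).dot(C).dot(tt.diag(sigma))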

Speed up Metropolis-Hastings in Python

时光怂恿深爱的人放手, submitted on 2019-11-29 22:15:53
Question: I have some code that samples a posterior distribution using MCMC, specifically Metropolis-Hastings. I use scipy to generate the random samples:

    import numpy as np
    from scipy import stats

    def get_samples(n):
        """
        Generate and return a randomly sampled posterior.

        For simplicity, the prior is fixed as Beta(a=2, b=5) and the
        likelihood is fixed as Normal(0, 2).

        :type n: int
        :param n: number of iterations
        :rtype: numpy.ndarray
        """
        x_t = stats.uniform(0, 1).rvs()  # initial value
        posterior = np.zeros((n,))
        for t in range(n):
            # propose a new value with a normal random-walk step around x_t
            x_prime = stats.norm(loc=x_t).rvs()
            # unnormalised target density: prior * likelihood
            p_prime = stats.beta(a=2, b=5).pdf(x_prime) * stats.norm(loc=0, scale=2).pdf(x_prime)
            p_t = stats.beta(a=2, b=5).pdf(x_t) * stats.norm(loc=0, scale=2).pdf(x_t)
            # Metropolis accept/reject
            if np.random.rand() < p_prime / p_t:
                x_t = x_prime
            posterior[t] = x_t
        return posterior
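
One standard way to speed up a sampler like this, independent of the model, is to draw all the randomness in batched calls before the loop, freeze the scipy distributions once, and cache the current density instead of recomputing it every iteration; a minimal sketch under those assumptions:

    import numpy as np
    from scipy import stats

    def get_samples_fast(n):
        """Metropolis-Hastings with all randomness drawn up front."""
        rng = np.random.default_rng()
        steps = rng.normal(size=n)   # proposal innovations, one batch call
        u = rng.uniform(size=n)      # acceptance uniforms, one batch call
        prior = stats.beta(a=2, b=5)            # frozen distributions,
        likelihood = stats.norm(loc=0, scale=2) # built once outside the loop

        def target(x):
            return prior.pdf(x) * likelihood.pdf(x)

        x_t = rng.uniform()
        p_t = target(x_t)  # cache the current density
        posterior = np.empty(n)
        for t in range(n):
            x_prime = x_t + steps[t]  # same random walk as stats.norm(loc=x_t).rvs()
            p_prime = target(x_prime)
            if u[t] < p_prime / p_t:
                x_t, p_t = x_prime, p_prime
            posterior[t] = x_t
        return posterior

Per-iteration .rvs() calls carry substantial overhead in scipy, so batching the draws alone typically accounts for most of the speedup.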

MCMC

◇◆丶佛笑我妖孽, submitted on 2019-11-28 02:11:19
The core idea of the MCMC algorithm is this: we know a probability density function and want to draw samples from it in order to study some of the distribution's statistical properties, but the function is so complex that direct sampling is impractical; this is where MCMC comes in. It differs from the variational autoencoder (VAE) setting: with a VAE, we have sample points known to come from a single distribution, but we have no explicit expression for that distribution, and we still want to draw new samples from it; that is the problem the VAE's ideas address.

How MCMC works

The following content is reposted from: https://www.cnblogs.com/xbinworld/p/4266146.html

Background

Stochastic simulation is also called Monte Carlo simulation. The method's development began in the 1940s and was closely tied to the Manhattan Project: several greats of the era, including Ulam, von Neumann, Fermi, Feynman, and Nicholas Metropolis, began using statistical simulation while studying neutron chain reactions in fissile material at Los Alamos National Laboratory, implementing it on the earliest computers. [3]

An important problem in stochastic simulation is: given a probability distribution p(x), how do we generate samples from it on a computer? In general, samples from the uniform distribution Uniform(0,1) are relatively easy to generate: a linear congruential generator produces pseudo-random numbers, and once such a deterministic algorithm has generated a sequence of pseudo-random numbers in [0,1], the sequence's statistical properties closely match those of Uniform(0,1).
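
As a concrete illustration of that last point, here is a minimal linear congruential generator sketch in Python, using the classic Park-Miller constants purely for illustration:

    def lcg_uniform(seed, n, a=16807, c=0, m=2**31 - 1):
        """Generate n pseudo-uniform samples in [0, 1) with a linear congruential generator."""
        x = seed
        out = []
        for _ in range(n):
            x = (a * x + c) % m  # the linear congruential recurrence
            out.append(x / m)    # scale into [0, 1)
        return out

    print(lcg_uniform(seed=42, n=5))

With well-chosen constants a, c, and m, the resulting sequence passes basic statistical checks against the uniform distribution, which is why such generators underlie most random sampling on computers.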