pymc

pymc3 with custom likelihood function from kernel density estimation

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-10 10:18:54
Question: I'm trying to use pymc3 with a likelihood function derived from some observed data. This observed data doesn't fit any nice, standard distribution, so I want to define my own, based on these observations. One approach is to use kernel density estimation over the observations. This was possible in pymc2, but doesn't play nicely with the Theano variables in pymc3. In my code below I'm just generating some dummy data that is normally distributed. As my prior, I'm essentially assuming a uniform
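Independent of the Theano-compatibility issue the asker runs into, the KDE half of the idea can be sketched with plain scipy: fit a Gaussian kernel density estimate to the observations and evaluate its log-density as a likelihood. This is only a conceptual sketch with dummy normal data, as in the question; wiring it into pymc3 would still require wrapping it for Theano.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
observed = rng.normal(loc=2.0, scale=1.0, size=1000)  # dummy data, as in the question

kde = gaussian_kde(observed)  # Gaussian KDE fitted to the observations

def kde_loglik(x):
    """Log-density of a point under the estimated distribution."""
    return float(np.log(kde.evaluate([x]))[0])

# Points near the bulk of the data score higher than points in the tails
print(kde_loglik(2.0), kde_loglik(6.0))
```

The resulting `kde_loglik` is an ordinary Python function; the remaining (and harder) step in pymc3 is exposing it as a symbolic log-probability.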

Multinomial distribution in PyMC

Submitted by 旧巷老猫 on 2019-12-08 21:42:36
I am a newbie to pymc. I have read the required material on GitHub and was doing fine until I got stuck on this problem. I want to make a collection of multinomial random variables which I can later sample using MCMC. The best I can do is rv = [ Multinomial("rv", count[i], p_d[i]) for i in xrange(0, len(count)) ] for i in rv: print i.value i.random() for i in rv: print i.value But this is no good, since I want to be able to call rv.value and rv.random(); otherwise I won't be able to sample from it. count is a list of non-negative integers, each denoting the value of n for that distribution, e.g. a
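The sampling behaviour the list comprehension is after can be sketched with numpy alone: one multinomial draw per entry of `count`, each with its own probability vector. The names `count` and `p_d` are taken from the question; the values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# count[i] is n for the i-th multinomial; p_d[i] is its probability vector
# (names from the question; these particular values are invented)
count = [5, 10, 3]
p_d = [np.array([0.2, 0.3, 0.5]),
       np.array([0.1, 0.6, 0.3]),
       np.array([0.25, 0.25, 0.5])]

draws = [rng.multinomial(n, p) for n, p in zip(count, p_d)]

for n, d in zip(count, draws):
    print(d, d.sum() == n)  # each draw distributes exactly n trials over the bins
```

In pymc itself, giving every element a distinct name (e.g. `"rv_%i" % i`) rather than reusing `"rv"` is what lets the sampler treat them as separate nodes.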

Converting a mixture of gaussians to PyMC3

Submitted by 北慕城南 on 2019-12-08 06:43:07
Question: This question was migrated from Cross Validated because it can be answered on Stack Overflow. I am trying to learn PyMC3 and want to build a simple mixture-of-Gaussians example. I found this example and want to convert it to pymc3, but I'm currently getting an error when trying to plot the traceplot. n1 = 500 n2 = 200 n = n1+n2 mean1 = 21.8 mean2 = 42.0 precision = 0.1 sigma = np.sqrt(1 / precision) # precision = 1/sigma^2 print "sigma1: %s" % sigma1 print "sigma2: %s" %
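Note that the quoted snippet defines only `sigma` but prints `sigma1` and `sigma2`, which would raise a NameError before any model code runs. A minimal sketch of the data-generation step, keeping the question's sizes, means, and precision (and assuming, as the snippet suggests, that both components share the same sigma):

```python
import numpy as np

rng = np.random.default_rng(0)

n1, n2 = 500, 200
mean1, mean2 = 21.8, 42.0
precision = 0.1
sigma = np.sqrt(1 / precision)  # precision = 1/sigma^2

# Both components use the shared sigma; the original prints of sigma1/sigma2
# refer to names that were never defined.
data = np.concatenate([rng.normal(mean1, sigma, n1),
                       rng.normal(mean2, sigma, n2)])

print(data.shape, round(sigma, 4))
```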

PyMC3 - Differences in ways observations are passed to model -> difference in results?

Submitted by 依然范特西╮ on 2019-12-08 04:50:03
Question: I'm trying to understand whether there is any meaningful difference between the ways of passing data into a model, either aggregated or as single trials (note this is only a sensible question for certain distributions, e.g. Binomial). I am predicting p for a yes/no trial, using a simple model with a Binomial distribution. What is the difference in the computation/results of the following models (if any)? I chose the two extremes, either passing in a single trial at once (reducing to Bernoulli) or
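The two parameterizations can be compared directly: for fixed data, the per-trial Bernoulli log-likelihood and the aggregated Binomial log-likelihood differ only by the log binomial coefficient log C(n, k), which does not depend on p, so the posterior over p is identical either way. A scipy sketch with made-up yes/no outcomes:

```python
import numpy as np
from scipy.stats import bernoulli, binom
from scipy.special import gammaln

p = 0.3
trials = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])  # made-up yes/no outcomes
n, k = len(trials), int(trials.sum())

ll_bern = bernoulli.logpmf(trials, p).sum()   # per-trial (Bernoulli) likelihood
ll_binom = binom.logpmf(k, n, p)              # aggregated (Binomial) likelihood

# log C(n, k) via log-gamma; this constant is the entire difference
log_nck = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
print(ll_binom - ll_bern, log_nck)
```

Since the difference is constant in p, MCMC over p targets the same posterior in both formulations; only per-step cost and shape handling differ.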

Bayesian Correlation with PyMC3

Submitted by 断了今生、忘了曾经 on 2019-12-07 14:05:10
Question: I'm trying to convert this example of Bayesian correlation for PyMC2 to PyMC3, but I get completely different results. Most importantly, the mean of the multivariate normal distribution quickly goes to zero, whereas it should be around 400 (as it is for PyMC2). Consequently, the estimated correlation quickly goes towards 1, which is wrong as well. The full code is available in this notebook for PyMC2 and in this notebook for PyMC3. The relevant code for PyMC2 is def analyze(data): # priors
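One plausible suspect for this kind of discrepancy (an assumption here, since the linked notebooks' PyMC3 code is not shown) is the parameterization change between versions: PyMC2's `pymc.Normal('mu', 0, 0.000001)` takes a precision tau as its third argument, while PyMC3's Normal takes a standard deviation by default. Passing 0.000001 through unchanged as an sd would pin the prior on mu near zero, which matches the observed behaviour. The conversion itself is one line:

```python
import numpy as np

tau = 0.000001            # PyMC2: pymc.Normal('mu', 0, tau) -- third argument is precision
sd = 1.0 / np.sqrt(tau)   # equivalent sd for a PyMC3-style Normal(mu=0, sd=sd)

print(sd)  # a very flat prior, easily wide enough for a mean near 400
```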

Bayesian Probabilistic Matrix Factorization (BPMF) with PyMC3: PositiveDefiniteError using `NUTS`

Submitted by 北城以北 on 2019-12-07 06:30:13
Question: This question was migrated from Cross Validated because it can be answered on Stack Overflow. I've implemented the Bayesian Probabilistic Matrix Factorization algorithm using pymc3 in Python. I also implemented its precursor, Probabilistic Matrix Factorization (PMF). See my previous question for a reference to the data used here. I'm having trouble drawing MCMC samples using the NUTS sampler. I initialize the model parameters using the MAP from PMF, and the
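A PositiveDefiniteError usually means a covariance (or precision) matrix has numerically lost positive-definiteness, e.g. through a rank-deficient initialization. A standard workaround, sketched here with numpy (this is a generic technique, not the asker's code), is to add a small diagonal jitter until the Cholesky factorization succeeds:

```python
import numpy as np

def jitter_to_pd(a, max_tries=10):
    """Add increasing diagonal jitter until the matrix is positive definite."""
    jitter = 1e-10
    for _ in range(max_tries):
        candidate = a + jitter * np.eye(a.shape[0])
        try:
            np.linalg.cholesky(candidate)  # raises LinAlgError if not PD
            return candidate
        except np.linalg.LinAlgError:
            jitter *= 10
    raise ValueError("could not make matrix positive definite")

# A rank-deficient matrix: Cholesky fails on it directly
bad = np.array([[1.0, 1.0],
                [1.0, 1.0]])
fixed = jitter_to_pd(bad)
L = np.linalg.cholesky(fixed)  # now succeeds
print(np.diag(fixed))
```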

Designing a simple Binomial distribution throws core dump in pymc

Submitted by 梦想的初衷 on 2019-12-07 03:32:26
I am trying to design a simple binomial distribution in pymc. However, it fails with the error below; the same code works fine if I use a Poisson distribution instead of a Binomial. import pymc as pm from pymc import Beta,Binomial,Exponential import numpy as np from pymc.Matplot import plot as mcplot data = pm.rbinomial(5,0.01,size=100) p = Beta("p",1,1) observations = Binomial("obs",5,p,value=data,observed=True) model = pm.Model([p,observations]) mcmc = pm.MCMC(model) mcmc.sample(400,100,2) mcplot(mcmc) Error: venki@venki-HP-248-G1-Notebook-PC:~/Desktop$ python perf_testing.py *** glibc detected ***
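Whatever the cause of the crash, this particular model has a closed-form answer that is useful for checking any sampler output: a Beta(1,1) prior with Binomial(5, p) observations is conjugate, so the posterior is again a Beta. A numpy sketch with the question's settings (n=5, p=0.01, 100 observations):

```python
import numpy as np

rng = np.random.default_rng(0)

n_per_obs, true_p, size = 5, 0.01, 100
data = rng.binomial(n_per_obs, true_p, size=size)  # mostly zeros at p = 0.01

# Beta(1,1) prior + Binomial likelihood -> Beta posterior (conjugacy):
#   alpha' = alpha + total successes, beta' = beta + total failures
alpha_post = 1 + data.sum()
beta_post = 1 + n_per_obs * size - data.sum()
posterior_mean = alpha_post / (alpha_post + beta_post)

print(alpha_post, beta_post, posterior_mean)
```

With so few successes at p = 0.01, the posterior concentrates close to the true value; an MCMC run on the same model should agree.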

PYMC3 Seasonal Variables

Submitted by ぐ巨炮叔叔 on 2019-12-07 03:04:01
Question: I'm relatively new to PyMC3 and I'm trying to implement a Bayesian Structural Time Series (BSTS) model without regressors, for instance the model fit here in R. The model is as follows: I can implement the local linear trend using a GaussianRandomWalk as follows: delta = pymc3.GaussianRandomWalk('delta',mu=0,sd=1,shape=99) mu = pymc3.GaussianRandomWalk('mu',mu=delta,sd=1,shape=100) However, I'm at a loss for how to encode the seasonal variable (tau) in PyMC3. Do I need to roll a custom random walk
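The usual BSTS seasonal component with period S defines each new effect as minus the sum of the previous S-1 effects plus noise, so that any S consecutive effects sum to (approximately) zero. Before worrying about the pymc3 encoding, the recursion itself can be sketched deterministically in numpy (the period S=4 and noise scale here are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(1)

S, T = 4, 100           # seasonal period and series length (illustrative values)
sigma_tau = 0.1
tau = np.zeros(T)
tau[:S - 1] = rng.normal(0, 1, S - 1)  # arbitrary initial seasonal effects

# tau_t = -(tau_{t-1} + ... + tau_{t-S+1}) + eps_t  (sum-to-zero seasonality)
for t in range(S - 1, T):
    tau[t] = -tau[t - S + 1:t].sum() + rng.normal(0, sigma_tau)

# Every window of S consecutive effects sums to a single noise term eps_t
window_sums = np.array([tau[t:t + S].sum() for t in range(T - S)])
print(np.abs(window_sums).max())
```

In pymc3 the analogous move is to place a Normal on that same linear combination of lagged states, rather than rolling a custom random-walk distribution.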

How do I get parameters from a posterior distribution in PyMC?

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-06 15:12:07
I have the following program written in PyMC: import pymc from pymc.Matplot import plot as mcplot def testit( passed, test_p = 0.8, alpha = 5, beta = 2): Pi = pymc.Beta( 'Pi', alpha=alpha, beta=beta) Tj = pymc.Bernoulli( 'Tj', p=test_p) @pymc.deterministic def flipper( Pi=Pi, Tj=Tj): return Pi if Tj else (1-Pi) # Pij = Pi if Tj else (1-Pi) # return pymc.Bernoulli( 'Rij', Pij) Rij = pymc.Bernoulli( 'Rij', p=flipper, value=passed, observed=True) model = pymc.MCMC( [ Pi, Tj, flipper, Rij]) model.sample(iter=10000, burn=1000, thin=10) mcplot(model) testit( 1.) It appears to be working properly,
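Once the PyMC2 run finishes, posterior parameters come from summarizing the stored samples (e.g. `model.trace('Pi')[:]`). The summarizing step itself is plain numpy; the sketch below uses a stand-in sample array drawn from the question's Beta(5, 2) prior, since no real trace is available here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for model.trace('Pi')[:] -- here simply draws from the Beta(5, 2)
# prior used in the question; a real run would use the MCMC trace instead.
samples = rng.beta(5, 2, size=900)  # 10000 iters, 1000 burn, thin 10 -> 900 kept

post_mean = samples.mean()
ci_low, ci_high = np.percentile(samples, [2.5, 97.5])  # 95% credible interval

print(post_mean, (ci_low, ci_high))
```

For a point estimate of Pi, the posterior mean or median of the trace is the usual choice; mcplot only visualizes these same samples.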
