pymc

Survival analysis in PyMC 3

Submitted by 妖精的绣舞 on 2019-12-03 20:10:45
Question: I tried to port the simple survival model from here (the first one in the introduction) from PyMC 2 to PyMC 3. However, I couldn't find any equivalent of the observed decorator, and my attempt to write a new distribution failed. Could someone provide an example of how this is done in PyMC 3?

Answer 1: This is a tricky port, and it requires three new concepts: use of the Theano tensor, use of DensityDist, and passing a dict as observed. This code provides the equivalent of the PyMC 2 model you linked to above:
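To illustrate the idea behind the answer (this sketch is not from the original post), the log-likelihood that one would hand to DensityDist can first be written as a plain function, assuming an exponential survival model with censoring: observed deaths contribute log f(t) = log(lam) - lam*t, while censored observations contribute log S(t) = -lam*t. The data arrays below are made up.

```python
import numpy as np

def censored_exp_loglik(lam, t, event):
    """Log-likelihood of exponential survival data.

    t     : observed times
    event : 1 if the death was observed, 0 if the observation was censored
    """
    t = np.asarray(t, dtype=float)
    event = np.asarray(event, dtype=float)
    # deaths contribute log(lam) - lam*t; censored points contribute -lam*t
    return np.sum(event * np.log(lam) - lam * t)

# made-up data: three subjects, the third censored at t = 1
ll = censored_exp_loglik(0.5, t=[2.0, 5.0, 1.0], event=[1, 1, 0])
```

In PyMC 3 the same function, written with theano.tensor operations, is what gets passed to pm.DensityDist, with the data supplied as observed={'event': event, 't': t}.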

Rewriting a pymc script for parameter estimation in dynamical systems in pymc3

Submitted by 坚强是说给别人听的谎言 on 2019-12-03 16:43:43
I'd like to use pymc3 to estimate unknown parameters and states in a Hodgkin-Huxley neuron model. My code in pymc is based on http://healthyalgorithms.com/2010/10/19/mcmc-in-python-how-to-stick-a-statistical-model-on-a-system-dynamics-model-in-pymc/ and executes reasonably well.

# parameter priors
@deterministic
def HH(priors in here):
    # model equations
    # return numpy arrays that somehow contain the probability distributions as elements
    return V, n, m, h

# Make V deterministic in one line. Seems to be the magic that makes this work.
V = Lambda('V', lambda HH=HH: HH[0])

# set up the likelihood
A =
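The pattern in the excerpt above — a deterministic simulator whose output feeds a likelihood — can be sketched without any PyMC machinery. Below, a one-variable Euler-integrated decay model stands in for the Hodgkin-Huxley equations (a hypothetical simplification, not the questioner's model), and a Gaussian log-likelihood compares its trajectory to data. In pymc one would wrap the simulator in @deterministic and the comparison in an observed Normal.

```python
import numpy as np

def simulate(g, v0, dt=0.01, steps=100):
    """Euler integration of dV/dt = -g*V, a toy stand-in for the HH equations."""
    V = np.empty(steps)
    V[0] = v0
    for i in range(1, steps):
        V[i] = V[i - 1] + dt * (-g * V[i - 1])
    return V

def gaussian_loglik(data, pred, tau):
    """Log-likelihood of `data` under independent Normal(pred, 1/tau) noise."""
    data, pred = np.asarray(data), np.asarray(pred)
    return (0.5 * len(data) * np.log(tau / (2 * np.pi))
            - 0.5 * tau * np.sum((data - pred) ** 2))

# a noise-free trajectory at the "true" parameter g = 2.0
true_V = simulate(g=2.0, v0=1.0)
```

The likelihood is maximized when the simulator is run at the parameters that generated the data, which is what the MCMC sampler exploits.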

Difficulties on pymc3 vs. pymc2 when discrete variables are involved

Submitted by 試著忘記壹切 on 2019-12-03 15:50:33
I'm updating some calculations from pymc2 to pymc3, and I'm having some problems with the samplers' behavior when my model contains discrete random variables. As an example, consider the following model using pymc2:

import pymc as pm

N = 100
data = 10

p = pm.Beta('p', alpha=1.0, beta=1.0)
q = pm.Beta('q', alpha=1.0, beta=1.0)
A = pm.Binomial('A', N, p)
X = pm.Binomial('x', A, q, observed=True, value=data)

It's not really representative of anything; it's just a model in which one of the unobserved variables is discrete. When I sample this model with pymc2 I get the following results: mcmc
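One standard workaround in pymc3 (a suggestion, not part of the original question) is to marginalize the discrete variable out: if A ~ Binomial(N, p) and X | A ~ Binomial(A, q), then marginally X ~ Binomial(N, p*q), so the latent discrete node can be removed and gradient-based samplers like NUTS used throughout. The identity is easy to check numerically with scipy:

```python
import numpy as np
from scipy.stats import binom

N, p, q, x = 100, 0.3, 0.6, 10

# direct marginalization: sum over every possible value of the latent A
lhs = sum(binom.pmf(a, N, p) * binom.pmf(x, a, q) for a in range(x, N + 1))

# closed form: a Binomial(N, p) thinned with probability q is Binomial(N, p*q)
rhs = binom.pmf(x, N, p * q)
```

With the marginal form, the pymc3 model needs only the two Beta priors and a single observed Binomial with probability p*q.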

Stochastic Optimization in Python

Submitted by 混江龙づ霸主 on 2019-12-03 14:41:00
I am trying to combine cvxopt (an optimization solver) and PyMC (a sampler) to solve convex stochastic optimization problems. For reference, installing both packages with pip is straightforward:

pip install cvxopt
pip install pymc

Both packages work perfectly well independently. Here is an example of how to solve an LP problem with cvxopt:

# Testing that cvxopt works
from cvxopt import matrix, solvers

# Example from http://cvxopt.org/userguide/coneprog.html#linear-programming
c = matrix([-4., -5.])
G = matrix([[2., 1., -1., 0.], [1., 2., 0., -1.]])
h = matrix([3., 3., 0., 0.])
sol = solvers
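The excerpt cuts off at the solver call. As a cross-check that does not require cvxopt at all, the same LP can be solved with scipy.optimize.linprog (a substitute solver, not the questioner's code). Note that cvxopt's matrix constructor is column-major, so the G above has rows [2, 1], [1, 2], [-1, 0], [0, -1]:

```python
import numpy as np
from scipy.optimize import linprog

# minimize -4x - 5y  subject to  2x + y <= 3,  x + 2y <= 3,  -x <= 0,  -y <= 0
c = [-4.0, -5.0]
G = [[2.0, 1.0], [1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]]
h = [3.0, 3.0, 0.0, 0.0]

# bounds=(None, None) because nonnegativity is already encoded in G and h,
# matching the cvxopt formulation (which has no implicit variable bounds)
res = linprog(c, A_ub=G, b_ub=h, bounds=(None, None))
```

The optimum is x = y = 1 with objective value -9, which is what the cvxopt call should report as well.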

Defining a custom PyMC distribution

Submitted by 旧巷老猫 on 2019-12-03 06:18:19
This is perhaps a silly question. I'm trying to fit data to a very strange PDF using MCMC evaluation in PyMC. For this example I just want to figure out how to fit a normal distribution where I manually input the normal PDF. My code is:

data = []
for count in range(1000):
    data.append(random.gauss(-200, 15))

mean = mc.Uniform('mean', lower=min(data), upper=max(data))
std_dev = mc.Uniform('std_dev', lower=0, upper=50)

# @mc.potential
# def density(x=data, mu=mean, sigma=std_dev):
#     return (1. / (sigma * np.sqrt(2 * np.pi)) * np.exp(-((x - mu)**2 / (2 * sigma**2))))

mc.Normal('process', mu=mean, tau
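One likely reason the commented-out potential fails (an observation added here, not from the original post): a PyMC potential must return a log-probability, while the code above returns the raw density. The hand-written log-density can be checked against the closed form:

```python
import numpy as np

def manual_normal_logpdf(x, mu, sigma):
    """Log of the normal PDF written out by hand, summed over the data.

    This is what a @mc.potential should return -- a log-probability,
    not the raw density from the commented-out code above.
    """
    x = np.asarray(x, dtype=float)
    return np.sum(-np.log(sigma * np.sqrt(2 * np.pi))
                  - (x - mu) ** 2 / (2 * sigma ** 2))

# at x = mu the per-point log-density is -0.5 * log(2*pi*sigma^2)
val = manual_normal_logpdf([0.0], mu=0.0, sigma=1.0)
```

The log-density is maximized when mu matches the data, which is what lets the sampler recover the parameters.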

Highest Posterior Density Region and Central Credible Region

Submitted by 安稳与你 on 2019-12-03 04:40:53
Question: Given a posterior p(Θ|D) over some parameters Θ, one can define the following:

Highest Posterior Density Region: the set of most probable values of Θ that, in total, constitute 100(1-α)% of the posterior mass. In other words, for a given α, we look for a p* that satisfies

    P( p(Θ|D) > p* | D ) = 1 - α

and then obtain the Highest Posterior Density Region as the set

    C*_α = { Θ : p(Θ|D) > p* }

Central Credible Region: using the same notation as above, a Credible Region (or interval) is a set C_α satisfying

    P( Θ ∈ C_α | D ) = 1 - α

Depending on the distribution, there could be many such intervals; the central credible interval is the one with α/2 of the posterior mass in each tail.
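Both regions can be computed from posterior samples with a few lines of numpy (a sketch added here, not part of the original question): the central interval is a pair of percentiles, while a sample-based HPD interval is the shortest window containing 100(1-α)% of the sorted draws.

```python
import numpy as np

def central_interval(samples, alpha=0.05):
    """Equal-tailed credible interval: alpha/2 mass in each tail."""
    return np.percentile(samples, [100 * alpha / 2, 100 * (1 - alpha / 2)])

def hpd_interval(samples, alpha=0.05):
    """Shortest interval containing a (1 - alpha) fraction of the samples."""
    s = np.sort(samples)
    n = len(s)
    m = int(np.floor((1 - alpha) * n))
    widths = s[m:] - s[: n - m]   # width of every candidate interval
    i = np.argmin(widths)         # index of the shortest one
    return s[i], s[i + m]

# for a symmetric unimodal posterior the two intervals coincide
rng = np.random.default_rng(0)
draws = rng.normal(0.0, 1.0, size=200_000)
lo, hi = hpd_interval(draws)
```

For a skewed posterior the two differ: the HPD interval shifts toward the mode, while the central interval simply clips the tails.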

How can I make a discrete state Markov model with pymc?

Submitted by ╄→гoц情女王★ on 2019-12-03 02:10:28
I am trying to figure out how to properly make a discrete-state Markov chain model with pymc. As an example (view in nbviewer), let's make a chain of length T=10 where the Markov state is binary, the initial state distribution is [0.2, 0.8], and the probability of switching states is 0.01 in state 1 and 0.5 in state 2:

import numpy as np
import pymc as pm

T = 10
prior0 = [0.2, 0.8]
transMat = [[0.99, 0.01], [0.5, 0.5]]

To make the model, I make an array of state variables and an array of transition probabilities that depend on the state variables (using the pymc.Index function)
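Independently of the pymc wiring, the chain itself is easy to simulate with numpy, which is useful for checking that whatever model you build reproduces these transition frequencies (the pymc approach in the question builds one Categorical node per time step, each indexed by its predecessor's state):

```python
import numpy as np

prior0 = [0.2, 0.8]
transMat = np.array([[0.99, 0.01],
                     [0.5, 0.5]])

def simulate_chain(T, prior0, transMat, rng):
    """Draw one path of a discrete-state Markov chain."""
    states = np.empty(T, dtype=int)
    states[0] = rng.choice(2, p=prior0)
    for t in range(1, T):
        # the next state's distribution is the row for the current state
        states[t] = rng.choice(2, p=transMat[states[t - 1]])
    return states

rng = np.random.default_rng(1)
path = simulate_chain(100_000, prior0, transMat, rng)
```

Over a long path, the empirical frequency of 0-to-1 transitions should approach 0.01, matching the first row of transMat.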

goodness of fit in pymc and plotting discrepancies

Submitted by 試著忘記壹切 on 2019-12-02 14:53:23
Question: I'm using PyMC 2.3.4 and I find it terrific. Now I would like to do some goodness-of-fit checking and plot the discrepancies as shown in section 7.3 of the documentation (https://pymc-devs.github.io/pymc/modelchecking.html). The documentation says that you need 3 inputs for the discrepancy plot:

x: the data
x_sim: the posterior distribution sample
x_exp: expected value

I can understand the first two but not the third. This is the code:

Sero = [0, 1, 4, 2, 2, 7, 13, 17, 90]
Pop = [15, 145, 170, 132, 107, 57, 68, 57, 251]

for i in range(len(Pop)):
    prob[i] = pymc.Uniform('prob_%i' % i, 0, 1.0)

serobservation = pymc.Binomial(
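On the third input: in the model-checking chapter, x_exp is the expected value of the data under the model, e.g. the posterior mean of each prob_i multiplied by the corresponding number of trials Pop[i]. pymc's discrepancy function then computes the Freeman-Tukey statistic D = Σ(√x - √e)², which can be reproduced by hand. The pooled plug-in estimate below is a made-up stand-in for the trace means, for illustration only:

```python
import numpy as np

Sero = np.array([0, 1, 4, 2, 2, 7, 13, 17, 90], dtype=float)
Pop = np.array([15, 145, 170, 132, 107, 57, 68, 57, 251], dtype=float)

# stand-in for the posterior means of prob_i: a single pooled rate
p_hat = Sero.sum() / Pop.sum()

# x_exp: expected counts under the model
x_exp = p_hat * Pop

def freeman_tukey(observed, expected):
    """Freeman-Tukey discrepancy, the default used by pymc.discrepancy."""
    observed, expected = np.asarray(observed), np.asarray(expected)
    return np.sum((np.sqrt(observed) - np.sqrt(expected)) ** 2)

D = freeman_tukey(Sero, x_exp)
```

In the actual workflow, the same statistic is computed once per posterior draw for both the data and the simulated data, and the discrepancy plot compares the two clouds.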
