Summing RVs using PYMC3

Submitted by 我的梦境 on 2020-03-23 08:00:38

Question


I am attempting to implement the model from the image. I am new to PyMC3 and I am not sure how to structure the model correctly. My attempt is below:

import numpy as np
from pymc3 import Model, Normal, Uniform, Lognormal

# sample data
logprem = np.array([8.66768002, 8.49862181, 8.60410456, 8.54966038, 8.55910259,
                    8.56216656, 8.51559191, 8.60630237, 8.56140145, 8.50956416])

with Model() as model:
    logelr = Normal('logelr', -0.4, np.sqrt(10), shape=10)

    α0 = 0
    β9 = 0

    α = Normal('α', 0, sigma=np.sqrt(10), shape=9)
    β = Normal('β', 0, sigma=np.sqrt(10), shape=9)

    a = Uniform('a', 0, 1, shape=10)

    σ0 = a[0] + a[1] + a[2] + a[3] + a[4] + a[5] + a[6] + a[7] + a[8] + a[9]
    σ1 = a[1] + a[2] + a[3] + a[4] + a[5] + a[6] + a[7] + a[8] + a[9]
    σ2 = a[2] + a[3] + a[4] + a[5] + a[6] + a[7] + a[8] + a[9]
    σ3 = a[3] + a[4] + a[5] + a[6] + a[7] + a[8] + a[9]
    σ4 = a[4] + a[5] + a[6] + a[7] + a[8] + a[9]
    σ5 = a[5] + a[6] + a[7] + a[8] + a[9]
    σ6 = a[6] + a[7] + a[8] + a[9]
    σ7 = a[7] + a[8] + a[9]
    σ8 = a[8] + a[9]
    σ9 = a[9]

    σ = [σ0, σ1, σ2, σ3, σ4, σ5, σ6, σ7, σ8, σ9]

    for w in range(10):
        for d in range(10):
            if w == 0:
                if d == 9:
                    μ = logprem[w] + logelr + α0 + β9
                else:
                    μ = logprem[w] + logelr + α0 + β[d]
            else:
                if d == 9:
                    μ = logprem[w] + logelr + α[w-1] + β9
                else:
                    μ = logprem[w] + logelr + α[w-1] + β[d]

            C = Lognormal('C', μ, σ[d])

Running this raises a TypeError:

TypeError: For compute_test_value, one input test value does not have the requested type.

The error when converting the test value to that variable type:
Wrong number of dimensions: expected 0, got 1 with shape (10,).

How do I define C with the correct shape?


Answer 1:


The correct shape for C is (W,D), and since all of this is backed by a Theano computational graph over tensor objects, it is best to avoid Python loops and stick to theano.tensor operations. Here is an implementation along those lines:
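Incidentally, the root cause of the TypeError is that logelr was declared with shape=10, so each μ built in the loop is a length-10 vector, while Lognormal('C', μ, σ[d]) with no shape argument expects a scalar. A minimal NumPy sketch of that shape mismatch (placeholder values, not a PyMC3 model):

```python
import numpy as np

# logelr was declared with shape=10, so it behaves like a length-10 vector
logelr = np.zeros(10)          # stand-in for the shape=10 random variable
logprem_w = 8.66768002         # a single scalar entry logprem[w]
alpha_0, beta_d = 0.0, 0.5     # scalar stand-ins for the loop terms

mu = logprem_w + logelr + alpha_0 + beta_d
print(mu.shape)  # (10,) -- but the scalar Lognormal expects shape ()
```

This matches the error text: the test value has 1 dimension with shape (10,) where 0 dimensions were expected.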

import numpy as np
import pymc3 as pm
import theano
import theano.tensor as tt

D = 10
W = 10

# begin with a (W,1) column vector, then broadcast to (W,D)
logprem = theano.shared(
    np.array(
        [[8.66768002, 8.49862181, 8.60410456, 8.54966038, 8.55910259,
          8.56216656, 8.51559191, 8.60630237, 8.56140145, 8.50956416]]) \
      .T \
      .repeat(D, axis=1))

with pm.Model() as model:
    logelr = pm.Normal('logelr', -0.4, np.sqrt(10))

    # col vector
    alpha = pm.Normal("alpha", 0, sigma=np.sqrt(10), shape=(W-1,1))

    # row vector
    beta = pm.Normal("beta", 0, sigma=np.sqrt(10), shape=(1,D-1))

    # prepend zero and broadcast to (W,D)
    alpha_aug = tt.concatenate([tt.zeros((1,1)), alpha], axis=0).repeat(D, axis=1)

    # append zero and broadcast to (W,D)
    beta_aug = tt.concatenate([beta, tt.zeros((1,1))], axis=1).repeat(W, axis=0)

    # technically the indices will be reversed
    # e.g., a[0], a[9] here correspond to a_10, a_1 in the paper, resp.
    a = pm.Uniform('a', 0, 1, shape=D)

    # Note: the [::-1] reverses the cumulative sum so that
    # sigma_0 > sigma_1 > ... > sigma_9
    sigma = pm.Deterministic('sigma', tt.extra_ops.cumsum(a)[::-1].reshape((1,D)))

    # now everything here has shape (W,D) or is scalar (logelr)
    mu = logprem + logelr + alpha_aug + beta_aug

    # sigma will be broadcast automatically
    C = pm.Lognormal('C', mu=mu, sigma=sigma, shape=(W,D))
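The shape bookkeeping above can be double-checked with plain NumPy, which follows the same concatenate/repeat/broadcast semantics as theano.tensor (the array values here are placeholders, not draws from the model):

```python
import numpy as np

W, D = 10, 10

# (W-1,1) column and (1,D-1) row, as in the model (placeholder values)
alpha = np.ones((W - 1, 1))
beta = np.ones((1, D - 1))

# prepend a zero row to alpha, append a zero column to beta, then broadcast
alpha_aug = np.concatenate([np.zeros((1, 1)), alpha], axis=0).repeat(D, axis=1)
beta_aug = np.concatenate([beta, np.zeros((1, 1))], axis=1).repeat(W, axis=0)

# reversed cumulative sum of positive terms gives a decreasing sigma
a = np.random.uniform(0, 1, size=D)
sigma = np.cumsum(a)[::-1].reshape((1, D))

print(alpha_aug.shape, beta_aug.shape, sigma.shape)  # (10, 10) (10, 10) (1, 10)
print(np.all(np.diff(sigma[0]) <= 0))                # True: sigma_0 >= sigma_1 >= ...
```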

The key tricks are:

  • prepending and appending zeros onto alpha and beta, which allows everything to be kept in tensor form;
  • using the tt.extra_ops.cumsum method to concisely express step 5;
  • getting all the terms in step 6 to have the shape (W,D).

This could be simplified further if one could perform an outer product between the alpha and beta vectors with addition as the operator (e.g., in R the outer function allows arbitrary ops), but I couldn't find such a method under theano.tensor.

This doesn't sample particularly well with NUTS, but it may improve once you have actually observed values of C to plug in.

with model:
    # using lower target_accept and tuning throws divergences
    trace = pm.sample(tune=5000, draws=2000, target_accept=0.99)

pm.summary(trace, var_names=['alpha', 'beta', 'a', 'sigma'])

Since this is just prior sampling, the only genuinely interesting output is the distribution of the transformed variable sigma:

pm.plot_forest(trace, var_names=['sigma'])

which one can see coheres with the requirement that sigma_{d} > sigma_{d+1}.



Source: https://stackoverflow.com/questions/60407008/summing-rvs-using-pymc3
