markov-chains

What is the significance of the stationary distribution of a Markov chain given its initial state?

て烟熏妆下的殇ゞ submitted on 2019-12-13 07:08:57
Question: Let X_n be a Markov chain whose transition matrix P is not regular. Say we have a stationary distribution (pi_0, ..., pi_n) and P(X_0 = i) = 0.2; does this tell us anything? To be clearer: I ask because Karlin says that when a stationary distribution is not a limiting distribution, P(X_n = i) depends on the initial distribution. What exactly does this mean?

Answer 1: Your title's question requires a lengthy answer; I'd have to just provide some references for you to read more on Markov chains and ergodic theory. However, your specific question: "...when a
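The answer is cut off above. To illustrate the point the asker quotes from Karlin, here is a minimal NumPy sketch (my own example, not from the thread) of a periodic chain whose stationary distribution is not a limiting distribution, so P(X_n = i) keeps depending on the initial distribution:

```python
import numpy as np

# A 2-state chain that deterministically swaps states each step. It is not
# regular (period 2), yet pi = (0.5, 0.5) is stationary: pi @ P == pi.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
pi = np.array([0.5, 0.5])
delta = np.array([1.0, 0.0])   # start deterministically in state 0

for n in range(4):
    Pn = np.linalg.matrix_power(P, n)
    print(n, delta @ Pn, pi @ Pn)
# From delta, P(X_n = 0) alternates 1, 0, 1, 0, ... and never converges;
# from pi it is 0.5 for every n. The n-step distribution depends on where
# the chain started, which is what Karlin's remark means.
```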

Calculating Markov chain probabilities with values too large to exponentiate

[亡魂溺海] submitted on 2019-12-12 18:13:22
Question: I use exp(X) as the rate for a Markov chain, so the ratio of selecting one link over another is exp(X1)/exp(X2). My problem is that sometimes X is very large, so exp(X) will exceed the range of double. Alternatively: given an array X[i], with some X[i] so large that exp(X[i]) overflows the range of double, calculate, for each i, exp(X[i]) / S, where S is the sum of all the exp(X[i]).

Answer 1: This pseudo-code should work: Let M = the largest X[i]. For each i: subtract M from X[i]
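The answer is truncated, but it is describing the standard max-shift (log-sum-exp) trick. A minimal Python sketch of the idea:

```python
import numpy as np

def softmax_stable(x):
    """Compute exp(x[i]) / sum_j exp(x[j]) without overflowing.

    Subtracting M = max(x) leaves every ratio unchanged, because
    exp(x[i] - M) / sum_j exp(x[j] - M) == exp(x[i]) / sum_j exp(x[j]),
    and after the shift the largest exponent is exactly 0.
    """
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())
    return e / e.sum()

# exp(1001) overflows a double, but the shifted version is fine:
print(softmax_stable([1000.0, 1001.0, 999.0]))
```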

Matlab: PDF from a Markov Chain

左心房为你撑大大i submitted on 2019-12-12 00:59:16
Question: I have generated a Markov chain using Matlab. From the generated chain, I need to calculate the probability density function (PDF). How should I do it? Should I use the generated Markov chain directly in any of the PDF functions, or should I do some pre-processing of the data before finding the PDF? The Markov chain is generated using the following code:

% x = the quantity corresponding to each state, typical element x(i)
% P = Markov transition matrix, typical element p(i,j) i,j=1,..
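The question's Matlab code is cut off above. One common approach, sketched here in Python with made-up values for x and P, is to simulate a long sample path and estimate the distribution empirically; since the state space is discrete, the result is really a PMF (a normalized histogram) over the values x(i) rather than a continuous PDF:

```python
import numpy as np

# Hypothetical stand-ins for the question's variables:
x = np.array([-1.0, 0.0, 2.5])           # quantity attached to each state
P = np.array([[0.5, 0.3, 0.2],           # row-stochastic transition matrix
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
rng = np.random.default_rng(0)

n_steps, state = 100_000, 0
visits = np.zeros(len(x))
for _ in range(n_steps):
    state = rng.choice(len(x), p=P[state])
    visits[state] += 1

pmf = visits / n_steps                    # empirical probability of each state
print(dict(zip(x, pmf)))                  # distribution over the values x(i)
```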

Negative Binomial Mixture in PyMC

旧巷老猫 submitted on 2019-12-11 06:56:04
Question: I am trying to fit a negative binomial mixture with PyMC. It seems I am doing something wrong, because the predictive doesn't look at all similar to the input data. The problem is probably in the priors on the negative binomial parameters. Any suggestions?

from sklearn.cluster import KMeans
import pymc as mc

n = 3  # number of components of the mixture
ndata = len(data)
dd = mc.Dirichlet('dd', theta=(1,)*n)
category = mc.Categorical('category', p=dd, size=ndata)
kme = KMeans(n)  # This is not needed
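The model code is cut off above. As a sanity check independent of PyMC, it can help to draw from a negative binomial mixture directly and compare the histogram against the data; a NumPy sketch with assumed (not the asker's) parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([0.5, 0.3, 0.2])   # mixture weights (assumed)
r = np.array([5.0, 2.0, 10.0])        # NB size parameters per component (assumed)
p = np.array([0.3, 0.6, 0.5])         # NB success probabilities (assumed)

size = 10_000
component = rng.choice(len(weights), size=size, p=weights)   # latent category
samples = rng.negative_binomial(r[component], p[component])
# Overlaying a histogram of `samples` on the input data shows quickly whether
# a given parameter region can reproduce the data's shape at all.
```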

Finding the next state from a transition matrix? Random walk Matlab simulation

。_饼干妹妹 submitted on 2019-12-11 05:40:01
Question: I'm trying to model a random walk mobility model in Matlab, and I'm facing a problem finding the next state from a transition matrix. I have already created my state transition matrix, but I don't know how to find the next state. I know I have all the probabilities for each state from the transition matrix, but I need to actually choose, based on those probabilities, what the next state will be. Can someone help me with that?

Answer 1: If A is your transition matrix with rows summing to 1, then you can
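The answer is truncated, but the standard recipe is: draw a uniform number and pick the first state whose cumulative row probability exceeds it. A Python sketch (the thread is about Matlab, where cumsum and rand work the same way):

```python
import numpy as np

rng = np.random.default_rng()

def next_state(A, i):
    """Sample the next state from row i of the row-stochastic matrix A."""
    u = rng.random()                                  # uniform draw on [0, 1)
    return int(np.searchsorted(np.cumsum(A[i]), u))   # first cumsum entry >= u

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
state, walk = 0, []
for _ in range(10):
    state = next_state(A, state)
    walk.append(state)
print(walk)
```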

Fitting a VLMC to very long sequences

霸气de小男生 submitted on 2019-12-11 03:39:26
Question: I am trying to fit a VLMC (variable-length Markov chain) to a dataset where the longest sequence is 296 states. I do it as shown below:

# Load libraries
library(PST)
library(RCurl)
library(TraMineR)

# Load and transform data
x <- getURL("https://gist.githubusercontent.com/aronlindberg/08228977353bf6dc2edb3ec121f54a29/raw/241ef39125ecb55a85b43d7f4cd3d58f617b2ecf/challenge_level.csv")
data <- read.csv(text = x)
data.seq <- seqdef(data[,2:ncol(data)], missing = NA, right = NA, nr = "*")
S1 <- pstree(data.seq, ymin = 0.01, lik
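pstree() from the PST package builds a probabilistic suffix tree, the data structure behind a VLMC. Independent of R, the core object it estimates is a table of next-state distributions conditioned on variable-length contexts; a toy Python sketch of that counting step (pruning omitted), just to make the concept concrete:

```python
from collections import Counter, defaultdict

def context_counts(sequences, max_depth=3):
    """Tally next-symbol counts for every context of length <= max_depth."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for t, symbol in enumerate(seq):
            for d in range(min(t, max_depth) + 1):
                context = tuple(seq[t - d:t])   # the d symbols preceding position t
                counts[context][symbol] += 1
    return counts

# Toy sequences standing in for the CSV data:
seqs = [list("ababbab"), list("aabbab")]
for ctx, c in sorted(context_counts(seqs, 2).items()):
    total = sum(c.values())
    print(ctx, {s: round(n / total, 2) for s, n in c.items()})
```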

Bayesian fit of cosine wave taking longer than expected

旧巷老猫 submitted on 2019-12-11 00:33:53
Question: In a recent homework assignment, I was asked to perform a Bayesian fit over a set of data a and b using a Metropolis algorithm. The relationship between a and b is given by e(t) = e_0 * cos(w * t), with w = 2 * pi. The Metropolis algorithm is (it works fine with other fits):

def metropolis(logP, args, v0, Nsteps, stepSize):
    vCur = v0
    logPcur = logP(vCur, *args)
    v = []
    Nattempts = 0
    for i in range(Nsteps):
        while(True):
            # Propose step:
            vNext = vCur + stepSize*np.random.randn(*vCur.shape)
            logPnext = logP(vNext, *args)
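The excerpt cuts off before the accept/reject step. For reference, a self-contained sketch of a standard random-walk Metropolis loop (my own version, not the asker's exact code):

```python
import numpy as np

def metropolis_sketch(logP, v0, n_steps, step_size, rng=np.random.default_rng(0)):
    """Random-walk Metropolis: accept when the log-ratio beats log(uniform)."""
    v_cur = np.asarray(v0, dtype=float)
    logp_cur = logP(v_cur)
    chain = []
    for _ in range(n_steps):
        v_next = v_cur + step_size * rng.standard_normal(v_cur.shape)
        logp_next = logP(v_next)
        if np.log(rng.random()) < logp_next - logp_cur:   # Metropolis acceptance
            v_cur, logp_cur = v_next, logp_next
        chain.append(v_cur.copy())
    return np.array(chain)

# Quick check: sample a 1-D standard normal target.
samples = metropolis_sketch(lambda v: -0.5 * float(v @ v), np.zeros(1), 5000, 1.0)
print(samples.mean(), samples.std())   # should be near 0 and 1
```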

What is the difference between matrixpower() and markov() when it comes to computing P^n?

佐手、 submitted on 2019-12-11 00:18:47
Question: Consider a Markov chain with state space S = {1, 2, 3, 4} and transition matrix

P = 0.1 0.2 0.4 0.3
    0.4 0.0 0.4 0.2
    0.3 0.3 0.0 0.4
    0.2 0.1 0.4 0.3

Then take a look at the following source code:

# markov function
markov <- function(init, mat, n, labels) {
  if (missing(labels)) {
    labels <- 1:length(init)
  }
  simlist <- numeric(n+1)
  states <- 1:length(init)
  simlist[1] <- sample(states, 1, prob = init)
  for (i in 2:(n+1)) {
    simlist[i] <- sample(states, 1, prob = mat[simlist[i-1],])
  }
  labels[simlist]
}
#
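The excerpt stops before the matrixpower() part, but the contrast the title asks about is: a matrix-power computation gives the exact n-step distribution init %*% P^n, while markov() simulates one random trajectory, so its output only approximates that distribution over many repeated runs. A NumPy sketch of the comparison, using the matrix from the question and an assumed uniform initial distribution:

```python
import numpy as np

P = np.array([[0.1, 0.2, 0.4, 0.3],
              [0.4, 0.0, 0.4, 0.2],
              [0.3, 0.3, 0.0, 0.4],
              [0.2, 0.1, 0.4, 0.3]])
init = np.full(4, 0.25)      # assumed uniform initial distribution
n = 10
rng = np.random.default_rng(0)

# Exact n-step distribution (what a matrixpower()-style helper computes):
exact = init @ np.linalg.matrix_power(P, n)

# Monte Carlo estimate in the spirit of markov(): simulate many trajectories
# and record where each ends up after n steps.
ends = np.zeros(4)
for _ in range(20_000):
    s = rng.choice(4, p=init)
    for _ in range(n):
        s = rng.choice(4, p=P[s])
    ends[s] += 1

print(exact)
print(ends / ends.sum())     # converges to `exact` as trajectories increase
```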

Algorithm for computing the plausibility of a function / Monte Carlo Method

不打扰是莪最后的温柔 submitted on 2019-12-09 11:27:20
Question: I am writing a program that attempts to duplicate the algorithm discussed at the beginning of this article: http://www-stat.stanford.edu/~cgates/PERSI/papers/MCMCRev.pdf. f is a function from char to char, and assume that Pl(f) is a 'plausibility' measure of that function. The algorithm, starting with a preliminary guess at the function, say f, and then a new function f*, is: Compute Pl(f). Change to f* by making a random transposition of the values f assigns to two symbols. Compute Pl(f*); if
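The question cuts off mid-step; in the Diaconis paper, the rest of the step is: if Pl(f*) > Pl(f), accept f*, otherwise accept it with probability Pl(f*)/Pl(f). A Python sketch of that loop, working in log-plausibilities since Pl is a product of many tiny bigram frequencies (log_pl here is a stand-in for the real measure):

```python
import math, random

def mcmc_search(log_pl, symbols, n_steps, rng=random.Random(0)):
    """Metropolis search over permutations of `symbols`."""
    f = list(symbols)                    # current guess at the cipher function
    cur = log_pl(f)
    for _ in range(n_steps):
        g = f[:]                         # propose f*: transpose two assigned values
        i, j = rng.sample(range(len(g)), 2)
        g[i], g[j] = g[j], g[i]
        nxt = log_pl(g)
        # Accept if Pl(f*) > Pl(f); else accept with probability Pl(f*)/Pl(f):
        if nxt > cur or rng.random() < math.exp(nxt - cur):
            f, cur = g, nxt
    return f
```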

R : function to generate a mixture distribution

心不动则不痛 submitted on 2019-12-08 21:24:41
Question: I need to generate samples from a mixture distribution:

- 40% of samples come from Gaussian(mean = 2, sd = 8)
- 20% of samples come from Cauchy(location = 25, scale = 2)
- 40% of samples come from Gaussian(mean = 10, sd = 6)

To do this, I wrote the following function:

dmix <- function(x){
  prob <- (0.4 * dnorm(x, mean = 2, sd = 8)) +
          (0.2 * dcauchy(x, location = 25, scale = 2)) +
          (0.4 * dnorm(x, mean = 10, sd = 6))
  return(prob)
}

And then tested it with:

foo = seq(-5, 5, by = 0.01)
vector = NULL
for (i in 1:1000){
  vector[i] <- dmix(foo[i])
}
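Note that dmix evaluates the mixture density; it does not generate samples. To draw samples, first pick a component with probabilities (0.4, 0.2, 0.4), then draw from that component; a NumPy sketch using the question's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
size = 10_000

# Components 0 and 2 are the Gaussians, component 1 is the Cauchy.
comp = rng.choice(3, size=size, p=[0.4, 0.2, 0.4])
samples = np.empty(size)
samples[comp == 0] = rng.normal(2, 8, (comp == 0).sum())
samples[comp == 1] = 25 + 2 * rng.standard_cauchy((comp == 1).sum())
samples[comp == 2] = rng.normal(10, 6, (comp == 2).sum())
# A histogram of `samples` should match the density dmix computes.
```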