markov-chains

Convert text prediction script [Markov chain] from JavaScript to Python

Submitted by 梦想的初衷 on 2019-12-23 10:07:46
Question: I've been trying for the last couple of days to convert this JS script to Python code. My implementation (a blind copy mostly, some minor fixes here and there) so far:

import random

class markov:
    memory = {}
    separator = ' '
    order = 2

    def getInitial(self):
        ret = []
        for i in range(0, self.order, 1):
            ret.append('')
        return ret

    def breakText(self, txt, cb):
        parts = txt.split(self.separator)
        prev = self.getInitial()
        def step(self):
            cb(prev, self.next)
            prev.shift()  # JavaScript function.
            prev.append(self
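The snippet above is cut off, but the JS idiom it mimics (shift() to drop the oldest word, push() to append the newest) maps cleanly onto Python list slicing. A minimal self-contained sketch of the same order-2 chain; the class and method names (Markov, learn, step) are mine, not from the original script:

```python
import random

class Markov:
    """Order-n Markov chain over words, mirroring the JS structure."""
    def __init__(self, order=2, separator=' '):
        self.memory = {}       # maps a tuple of `order` words -> list of next words
        self.order = order
        self.separator = separator

    def _initial(self):
        # JS getInitial(): a window of `order` empty strings
        return [''] * self.order

    def learn(self, txt):
        parts = txt.split(self.separator)
        prev = self._initial()
        for word in parts:
            self.memory.setdefault(tuple(prev), []).append(word)
            # JS prev.shift() + prev.push(word): drop oldest, append newest
            prev = prev[1:] + [word]

    def step(self, state, rng=random):
        """Sample a next word for the given window, or None if unseen."""
        choices = self.memory.get(tuple(state))
        return rng.choice(choices) if choices else None
```

Usage: after `m = Markov(); m.learn("a b a b a")`, repeatedly calling `m.step(window)` and sliding the window generates text.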

How to use machine learning to calculate a graph of states from a sequence of data?

Submitted by 你。 on 2019-12-22 09:36:33
Question: Generic formulation. I have a dataset consisting of a sequence of points with 12 features each. I am interested in detecting an event in this data. In the training data I know the moments the event occurred. When the event occurs, I can see an observable pattern in the sequence of points before it; the pattern is formed from about 300 consecutive points. I am interested in detecting when the event occurred in an infinite sequence of points. The analysis happens post factum. I am not
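One common framing for this kind of task is supervised sliding-window classification: slice the stream into fixed-width windows, label a window positive when an event follows it, and feed the vectors to any classifier. A sketch under that assumption — the function names and the window/step/horizon values are illustrative, not from the question:

```python
import numpy as np

def make_windows(seq, width=300, step=50):
    """Slice a (T, 12) sequence into overlapping (width, 12) windows,
    each flattened to a feature vector; returns (n_windows, width*12)
    plus the list of window start indices."""
    starts = list(range(0, len(seq) - width + 1, step))
    return np.array([seq[s:s + width].ravel() for s in starts]), starts

def label_windows(starts, width, event_times, horizon=10):
    """A window is positive if an event occurs within `horizon`
    points after it ends."""
    ends = [s + width for s in starts]
    return np.array([any(e <= t < e + horizon for t in event_times)
                     for e in ends])
```

The (X, y) pairs produced this way can go straight into any off-the-shelf classifier; the question's pattern length (~300 points) motivates the default window width.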

Understanding Markov Chain source code in R

Submitted by 断了今生、忘了曾经 on 2019-12-20 06:24:54
Question: The following source code is from a book. The comments were written by me to understand the code better.

#==================================================================
# markov(init, mat, n, states) = Simulates n steps of a Markov chain
#------------------------------------------------------------------
# init   = initial distribution
# mat    = transition matrix
# labels = a character vector of states used as labels for the data frame;
#          default is 1, ..., k
#----------------------------------------------
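For comparison, the book's R routine can be mirrored in Python. This is a sketch of my own with the same arguments (init, mat, n, labels), not the book's code:

```python
import numpy as np

def markov(init, mat, n, labels=None, rng=None):
    """Simulate n steps of a Markov chain.
    init:   initial distribution over k states
    mat:    k x k transition matrix (rows sum to 1)
    labels: optional state names; defaults to 0..k-1"""
    rng = rng or np.random.default_rng()
    k = len(init)
    labels = labels if labels is not None else list(range(k))
    state = rng.choice(k, p=init)          # draw X_0 from init
    path = [state]
    for _ in range(n):
        state = rng.choice(k, p=mat[state])  # one step from the current row
        path.append(state)
    return [labels[s] for s in path]
```

With a deterministic matrix such as [[0, 1], [1, 0]] the simulated path simply alternates between the two states, which makes the function easy to sanity-check.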

Subsetting a dataframe conditional on a factor (binary) column (vector) in R

Submitted by ﹥>﹥吖頭↗ on 2019-12-20 06:13:41
Question: I have a sequence of 1/0's indicating whether a patient is in remission or not, with the records taken at discrete times. How can I check the Markov property for each patient and then summarize the findings? That is, the assumption that the probability of remission for any patient at any time depends only on whether the patient was in remission at the last time point (the same thing as saying that the probability of remission for any patient at any time depends only on whether the patient
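A first practical step is to estimate per-patient transition probabilities from the 0/1 sequence; a check of the Markov property can then compare these first-order estimates against probabilities conditioned on the two previous states, to see whether the extra history changes anything. A minimal sketch (the function names are mine):

```python
from collections import Counter

def transition_counts(seq):
    """Count 0/1 -> 0/1 transitions in one patient's remission sequence."""
    return Counter(zip(seq, seq[1:]))

def transition_probs(seq):
    """Estimated P(next state | current state) for states 0 and 1."""
    counts = transition_counts(seq)
    probs = {}
    for a in (0, 1):
        total = counts[(a, 0)] + counts[(a, 1)]
        if total:
            probs[a] = {b: counts[(a, b)] / total for b in (0, 1)}
    return probs
```

Repeating the same counting with `zip(seq, seq[1:], seq[2:])` gives the second-order estimates; if P(next | last) and P(next | last two) are close for most patients, the first-order Markov assumption is plausible.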

Markov chain stationary distributions with scipy.sparse?

Submitted by 99封情书 on 2019-12-18 03:45:05
Question: I have a Markov chain given as a large sparse SciPy matrix A. (I've constructed the matrix in scipy.sparse.dok_matrix format, but converting to other formats or constructing it as csc_matrix is fine.) I'd like to find a stationary distribution p of this matrix, i.e. an eigenvector for the eigenvalue 1. All entries in this eigenvector should be positive and add up to 1, in order to represent a probability distribution. This means I want any solution of the system (A - I) p = 0, p.sum() = 1
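One hedged sketch uses scipy.sparse.linalg.eigs: a stochastic matrix's spectral radius is 1, so asking ARPACK for the largest-magnitude eigenpair recovers the stationary vector. Here A is assumed column-stochastic, so that A p = p matches the (A - I) p = 0 formulation in the question; `stationary` is my own name:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def stationary(A):
    """Stationary distribution of a column-stochastic sparse matrix A:
    the eigenvector for the largest-magnitude eigenvalue (which is 1),
    normalised so its entries sum to 1."""
    vals, vecs = spla.eigs(A, k=1, which='LM')
    p = np.abs(np.real(vecs[:, 0]))  # fix the arbitrary sign/phase
    return p / p.sum()

# Small column-stochastic example (each column sums to 1):
A = sp.csc_matrix(np.array([[0.5, 0.2, 0.3],
                            [0.3, 0.5, 0.2],
                            [0.2, 0.3, 0.5]]))
p = stationary(A)
```

Since the example happens to be doubly stochastic, its stationary distribution is uniform, which makes the result easy to check. If A is row-stochastic instead, pass A.T.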

Manual simulation of Markov Chain in R

Submitted by 半世苍凉 on 2019-12-17 20:32:31
Question: Consider the Markov chain with state space S = {1, 2}, transition matrix and initial distribution α = (1/2, 1/2). Simulate 5 steps of the Markov chain (that is, simulate X0, X1, ..., X5). Repeat the simulation 100 times. Use the results of your simulations to solve the following problems. Estimate P(X1 = 1 | X0 = 1). Compare your result with the exact probability. My solution:

# returns Xn
func2 <- function(alpha1, mat1, n1) {
  xn <- alpha1 %*% matrixpower(mat1, n1 + 1)
  return (xn)
}
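Note that func2 above computes the marginal distribution αP^(n+1) rather than sampling a path; estimating a conditional probability such as P(X1 = 1 | X0 = 1) needs sampled paths. The question's transition matrix did not survive extraction, so the matrix P below is purely hypothetical, chosen only to illustrate the simulation-versus-exact comparison; `simulate_chain` is my own helper name:

```python
import numpy as np

def simulate_chain(alpha, P, n, rng):
    """One sampled path X_0..X_n; states coded 0/1 (for the question's 1/2)."""
    x = rng.choice(2, p=alpha)
    path = [x]
    for _ in range(n):
        x = rng.choice(2, p=P[x])
        path.append(x)
    return path

# Hypothetical transition matrix -- the one in the question was lost.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
alpha = [0.5, 0.5]

rng = np.random.default_rng(0)
paths = [simulate_chain(alpha, P, 5, rng) for _ in range(10_000)]

# Estimate P(X_1 = 0 | X_0 = 0); the exact value is simply P[0, 0].
starts0 = [p for p in paths if p[0] == 0]
est = sum(p[1] == 0 for p in starts0) / len(starts0)
```

With 10,000 replications the estimate lands close to the exact entry P[0, 0]; with the question's 100 replications the sampling error is visibly larger, which is part of what the exercise is meant to show.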

What is the probability that the mouse will reach state A before state B?

Submitted by 守給你的承諾、 on 2019-12-14 03:37:45
Question: Maze. I have a maze as shown above (see the link); state 3 contains a prize while state 7 contains a shock. A mouse can be placed in any state from 1 to 9 at random, and it moves through the maze uniformly at random. Let Pi denote the probability that the mouse reaches state 3 before state 7, given that it started in compartment i. How do I compute Pi for i ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9}? Answer 1: Let Px be the probability that the game ends in position 3 if it starts in position x. We know that P3 = 1 and P7 = 0. If you start
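The answer's approach extends to a full linear system: for each transient compartment i, Pi is the average of Pj over i's neighbours, with the boundary conditions P3 = 1 and P7 = 0. The maze image is lost, so the adjacency below assumes a plain 3x3 grid numbered row by row, which may not match the original; `hitting_probs` is my own name:

```python
import numpy as np

# Hypothetical 3x3 maze (the image from the question is lost):
#   1 2 3
#   4 5 6
#   7 8 9   -- moves go to orthogonally adjacent compartments.
neighbors = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6], 4: [1, 5, 7],
             5: [2, 4, 6, 8], 6: [3, 5, 9], 7: [4, 8],
             8: [5, 7, 9], 9: [6, 8]}

def hitting_probs(neighbors, win=3, lose=7):
    """Solve P_i = mean of P_j over neighbours j, with P_win=1, P_lose=0."""
    states = sorted(neighbors)
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for s in states:
        i = idx[s]
        A[i, i] = 1.0
        if s == win:
            b[i] = 1.0                      # absorbing: P_win = 1
        elif s == lose:
            b[i] = 0.0                      # absorbing: P_lose = 0
        else:
            for t in neighbors[s]:          # P_s - mean(P_t) = 0
                A[i, idx[t]] -= 1.0 / len(neighbors[s])
    p = np.linalg.solve(A, b)
    return {s: p[idx[s]] for s in states}
```

A quick consistency check: this grid has a 180-degree symmetry (i maps to 10 - i) that swaps compartments 3 and 7, so P5 must be exactly 1/2 and Pi + P(10-i) must equal 1 for every i.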

Calculate the probability of observing a sequence using the markovchain package

Submitted by ぃ、小莉子 on 2019-12-13 16:09:36
Question: Let's use the dataset from this question:

dat <- data.frame(replicate(20, sample(c("A", "B", "C", "D"), size = 100, replace = TRUE)))

Then we can build the transition matrix and the Markov chain:

# Build transition matrix
trans.matrix <- function(X, prob = T) {
  tt <- table(c(X[, -ncol(X)]), c(X[, -1]))
  if (prob) tt <- tt / rowSums(tt)
  tt
}
trans.mat <- trans.matrix(as.matrix(dat))
attributes(trans.mat)$class <- 'matrix'

# Build markovchain
library(markovchain)
chain <- new('markovchain',
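Once a transition matrix exists, the probability of observing a particular sequence, conditional on its first state, is just the product of the one-step transition probabilities along it. A small Python sketch with a made-up label set and matrix (`sequence_prob` is my own name; the markovchain R package's own helper for this is not shown here):

```python
import numpy as np

def sequence_prob(trans, labels, seq):
    """P(seq | first state of seq): the product of the one-step
    transition probabilities along the observed sequence."""
    idx = {s: i for i, s in enumerate(labels)}
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= trans[idx[a], idx[b]]
    return p

# Illustrative 2-state transition matrix (rows sum to 1).
labels = ['A', 'B']
trans = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
```

For example, the sequence A, A, B has probability trans[A, A] * trans[A, B] under this matrix; a length-1 sequence has no transitions and hence probability 1.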