markov-chains

How do Markov Chains work and what is memorylessness?

帅比萌擦擦* submitted on 2019-12-04 09:44:05
Question: How do Markov chains work? I have read the Wikipedia article on Markov chains, but the thing I don't get is memorylessness. Memorylessness states that the next state depends only on the current state and not on the sequence of events that preceded it. If a Markov chain has this property, then what is the use of the chain in a Markov model? Please explain this property. Answer 1: You can visualize a Markov chain as a frog hopping from lily pad to lily pad on a pond. The frog does not remember which lily pad(s) it
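A minimal sketch of the property (not from the original thread; the two weather states and their probabilities are illustrative): the next state is sampled from a distribution indexed by the current state alone, and the "chain" is just the repeated application of that one-step rule.

```python
import random

# Hypothetical two-state chain; names and probabilities are made up.
transitions = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def next_state(current, rng=random):
    """Sample the next state; note that no history is ever consulted."""
    states = list(transitions[current])
    weights = [transitions[current][s] for s in states]
    return rng.choices(states, weights=weights, k=1)[0]

def walk(start, steps):
    """The 'chain': repeatedly apply the memoryless one-step rule."""
    state, path = start, [start]
    for _ in range(steps):
        state = next_state(state)
        path.append(state)
    return path
```

Memorylessness does not mean the chain is useless: the sequence produced by `walk` still has structure, because each step's distribution depends on where the walk currently is.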

Markov chain on letter scale and random text

99封情书 submitted on 2019-12-03 17:23:54
I would like to generate random text using letter frequencies from a book in a .txt file, so that each new character (from string.lowercase + ' ') depends on the previous one. How do I use Markov chains to do so? Or is it simpler to use 27 arrays of conditional frequencies, one per letter? Answer: Consider using collections.Counter to build up the frequencies while looping over the text file two letters at a time.
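The Counter suggestion above can be sketched as follows (a minimal illustration, not from the original answer; the sample text and names are made up). The table of per-letter Counters is equivalent to the "27 arrays of conditional frequencies", so the two options in the question are really the same model:

```python
import random
import string
from collections import Counter

ALPHABET = string.ascii_lowercase + " "

def bigram_counts(text):
    """counts[a][b] = number of times character b followed character a."""
    text = "".join(c for c in text.lower() if c in ALPHABET)
    counts = {c: Counter() for c in ALPHABET}
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length):
    """Random text where each character depends only on the previous one."""
    out = [start]
    for _ in range(length - 1):
        c = counts[out[-1]]
        if not c:  # letter never observed: fall back to a uniform choice
            out.append(random.choice(ALPHABET))
            continue
        letters, weights = zip(*c.items())
        out.append(random.choices(letters, weights=weights)[0])
    return "".join(out)
```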

Rewriting a pymc script for parameter estimation in dynamical systems in pymc3

坚强是说给别人听的谎言 submitted on 2019-12-03 16:43:43
I'd like to use pymc3 to estimate unknown parameters and states in a Hodgkin-Huxley neuron model. My pymc code is based on http://healthyalgorithms.com/2010/10/19/mcmc-in-python-how-to-stick-a-statistical-model-on-a-system-dynamics-model-in-pymc/ and executes reasonably well. In outline: define the parameter priors; write a @deterministic function HH(priors) containing the model equations, which returns numpy arrays V, n, m, h that somehow contain the probability distributions as elements; make V deterministic in one line, V = Lambda('V', lambda HH=HH: HH[0]), which seems to be the magic that makes this work; then set up the likelihood: A =
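The pattern in the linked post is "a deterministic simulator wrapped inside a probabilistic model". A plain-Python sketch of that pattern (pymc deliberately left out so it runs standalone; a toy decay model dx/dt = -k*x stands in for the Hodgkin-Huxley equations, and all names are illustrative):

```python
import math

def simulate(k, x0, dt, steps):
    """Euler-integrate the deterministic model; the analogue of HH() above."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-k * xs[-1]))
    return xs

def gaussian_loglik(data, model, sigma):
    """The likelihood that ties noisy observations to the trajectory."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma ** 2)
        - (d - m) ** 2 / (2 * sigma ** 2)
        for d, m in zip(data, model)
    )
```

An MCMC sampler (pymc, pymc3, or anything else) then just proposes values of k and x0, reruns `simulate`, and scores them with the log-likelihood.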

Best way to calculate the fundamental matrix of an absorbing Markov Chain?

我与影子孤独终老i submitted on 2019-12-03 11:43:30
I have a very large absorbing Markov chain (it scales with problem size, from 10 states to millions) that is very sparse (most states can transition to only 4 or 5 other states). I need to calculate one row of the fundamental matrix of this chain (the average frequency of each state given one starting state). Normally I'd do this by computing (I - Q)^(-1), but I haven't been able to find a good library that implements a sparse matrix inverse algorithm. I've seen a few papers on it, most of them PhD-level work. Most of my Google results point me to posts talking about how one shouldn't use a
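A sketch of the standard workaround (assuming SciPy; the 3-state Q matrix is illustrative): never form (I - Q)^(-1) explicitly. Since row i of N = (I - Q)^(-1) equals the solution x of the transposed system (I - Q)^T x = e_i, one sparse solve gives the single row that is needed, and the inverse (which is typically dense) is never materialized:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Transient-to-transient transition matrix Q (made-up values, substochastic rows).
Q = sp.csr_matrix(np.array([
    [0.0, 0.5, 0.0],
    [0.3, 0.0, 0.2],
    [0.0, 0.4, 0.0],
]))

def fundamental_row(Q, i):
    """Row i of N = (I - Q)^(-1): expected visits to each transient state
    when starting from state i, via one sparse solve instead of an inverse."""
    n = Q.shape[0]
    A = sp.identity(n, format="csc") - Q
    e = np.zeros(n)
    e[i] = 1.0
    # Solving A^T x = e_i yields x = A^(-T) e_i, i.e. row i of A^(-1).
    return spla.spsolve(A.T.tocsc(), e)
```

For millions of states, an iterative solver such as `scipy.sparse.linalg.gmres` on the same system is the usual next step when a direct factorization no longer fits in memory.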

Simple random english sentence generator [closed]

≯℡__Kan透↙ submitted on 2019-12-03 05:46:56
Question (closed as off-topic on Stack Overflow): I need a simple random English sentence generator. I need to populate it with my own words, but it needs to be capable of making longer sentences that at least follow the rules of English, even if they don't make sense. I expect there are millions of them out there, so rather than re-inventing the wheel, I'm
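One common way to get "grammatical but meaningless" sentences is a tiny context-free grammar that the user populates with their own words. A minimal sketch (all rules and words here are illustrative placeholders):

```python
import random

# Hypothetical grammar: nonterminals expand to lists of symbols,
# terminals are any symbols with no entry in GRAMMAR.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "Adj", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "Adj": [["green"], ["silent"]],
    "N": [["dog"], ["idea"]],
    "V": [["chases"], ["sleeps"]],
}

def expand(symbol, rng=random):
    """Recursively expand a symbol by picking a random production."""
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for sym in rng.choice(GRAMMAR[symbol]):
        words.extend(expand(sym, rng))
    return words

def sentence():
    return " ".join(expand("S")).capitalize() + "."
```

Sentences like "A silent idea chases the dog." follow the pattern even when they make no sense; swapping in your own word lists only requires editing the terminal rules.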

Using Markov chains (or something similar) to produce an IRC-bot

℡╲_俬逩灬. submitted on 2019-12-03 03:23:29
Question: I tried Google and found little that I could understand. I understand Markov chains at a very basic level: it's a mathematical model whose state changes depend only on the previous input, so, sort of an FSM with weighted random chances instead of fixed criteria? I've heard that you can use them to generate semi-intelligent nonsense, given sentences of existing words to use as a dictionary of kinds. I can't think of search terms to find this, so can anyone link me or explain how I could
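The "dictionary of kinds" idea can be sketched in a few lines (illustrative code, not from the original thread): map each word to the list of words observed to follow it, then walk the table randomly. Because followers are stored with repetition, frequent pairs are naturally picked more often.

```python
import random

def train(sentences):
    """Build word -> list-of-observed-followers from example sentences."""
    table = {}
    for sentence in sentences:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            table.setdefault(a, []).append(b)
    return table

def babble(table, start, max_words=10, rng=random):
    """Walk the table from a start word until a dead end or the length cap."""
    words = [start]
    while len(words) < max_words and words[-1] in table:
        words.append(rng.choice(table[words[-1]]))
    return " ".join(words)
```

For an IRC bot, `train` would be fed the channel's chat log, and `babble` would be triggered on incoming messages.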

When to use a certain Reinforcement Learning algorithm?

笑着哭i submitted on 2019-12-03 02:22:23
Question: I'm studying reinforcement learning and reading Sutton's book for a university course. Besides the classic DP, MC, TD, and Q-learning algorithms, I'm reading about policy gradient methods and genetic algorithms for solving decision problems. I have never had experience with this topic before, and I'm having trouble understanding when one technique should be preferred over another. I have a few ideas, but I'm not sure about them. Can someone briefly explain, or tell me a source where I can

How can I make a discrete state Markov model with pymc?

╄→гoц情女王★ submitted on 2019-12-03 02:10:28
I am trying to figure out how to properly build a discrete-state Markov chain model with pymc. As an example (view in nbviewer), let's make a chain of length T=10 where the Markov state is binary, the initial state distribution is [0.2, 0.8], and the probability of switching states is 0.01 in state 1 and 0.5 in state 2: import numpy as np; import pymc as pm; T = 10; prior0 = [0.2, 0.8]; transMat = [[0.99, 0.01], [0.5, 0.5]]. To make the model, I make an array of state variables and an array of transition probabilities that depend on the state variables (using the pymc.Index function)
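For reference, here is what the chain being modeled looks like when simulated directly in numpy (pymc is deliberately left out so this runs standalone; the indexing of `transMat` by the previous state is the step that `pymc.Index` performs inside the model):

```python
import numpy as np

T = 10
prior0 = [0.2, 0.8]
transMat = np.array([[0.99, 0.01],
                     [0.5, 0.5]])

def sample_chain(rng=None):
    """Draw one trajectory: initial state from prior0, then each later
    state from the transMat row selected by the previous state."""
    rng = rng or np.random.default_rng()
    states = np.empty(T, dtype=int)
    states[0] = rng.choice(2, p=prior0)
    for t in range(1, T):
        states[t] = rng.choice(2, p=transMat[states[t - 1]])
    return states
```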

How do Markov Chain Chatbots work?

半世苍凉 submitted on 2019-12-03 00:02:24
Question: I was thinking of creating a chatbot using something like Markov chains, but I'm not entirely sure how to get it to work. From what I understand, you build a table from data mapping a given word to the words that follow it. Is it possible to attach some sort of probability or counter while training the bot? Is that even a good idea? The second part of the problem is keywords. Assuming I can already identify keywords in the user's input, how do I generate a sentence that uses that keyword? I
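A sketch addressing both sub-questions (the sample corpus is illustrative): yes, keeping counts while training is the usual approach, one Counter per word, and a keyword can be used simply by starting the generated sentence at that keyword.

```python
import random
from collections import Counter, defaultdict

def train_counts(corpus):
    """counts[w] tallies how often each word was observed to follow w."""
    counts = defaultdict(Counter)
    for line in corpus:
        words = line.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def reply(counts, keyword, max_words=12, rng=random):
    """Generate a sentence seeded on a keyword taken from the user's input."""
    words = [keyword]
    while len(words) < max_words and counts[words[-1]]:
        followers, weights = zip(*counts[words[-1]].items())
        words.append(rng.choices(followers, weights=weights)[0])
    return " ".join(words)
```

A limitation worth noting: seeding at the keyword only extends the sentence forward; generating the words that come before the keyword requires a second table trained on reversed word pairs.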

When to use a certain Reinforcement Learning algorithm?

末鹿安然 submitted on 2019-12-02 15:54:30
I'm studying reinforcement learning and reading Sutton's book for a university course. Besides the classic DP, MC, TD, and Q-learning algorithms, I'm reading about policy gradient methods and genetic algorithms for solving decision problems. I have never had experience with this topic before, and I'm having trouble understanding when one technique should be preferred over another. I have a few ideas, but I'm not sure about them. Can someone briefly explain, or tell me a source where I can find something about, the typical situations in which a certain method should be used? As far as I understand: