markov-models

Decoding sequences in a GaussianHMM

随声附和 submitted on 2021-02-06 15:18:58
Question: I'm playing around with Hidden Markov Models for a stock market prediction problem. My data matrix contains various features for a particular security:

01-01-2001, .025, .012, .01
01-02-2001, -.005, -.023, .02

I fit a simple GaussianHMM:

from hmmlearn.hmm import GaussianHMM
mdl = GaussianHMM(n_components=3, covariance_type='diag', n_iter=1000)
mdl.fit(train[:, 1:])

With the model (λ), I can decode an observation vector to find the most likely hidden state sequence corresponding to the observations.
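
A minimal sketch of the decoding step, assuming a fitted model like the one above; the names obs and mdl are placeholders rather than part of the original question:

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Toy observation matrix: rows are time steps, columns are features
# (in the question this would be train[:, 1:], with the date column dropped).
obs = np.random.randn(200, 3)

mdl = GaussianHMM(n_components=3, covariance_type='diag', n_iter=1000)
mdl.fit(obs)

# Viterbi decoding: log-probability of the best path and the path itself.
logprob, states = mdl.decode(obs, algorithm="viterbi")

# predict() is a shortcut that returns only the most likely state sequence.
states_alt = mdl.predict(obs)

Both decode() and predict() accept any observation matrix with the same number of columns the model was trained on, so the same call works on held-out data.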

Generating Markov transition matrix in Python

半世苍凉 submitted on 2020-01-10 08:59:20
Question: Imagine I have a series over 4 possible Markovian states (A, B, C, D):

X = [A, B, B, C, B, A, D, D, A, B, A, D, ....]

How can I generate a Markov transition matrix using Python? The matrix must be 4 by 4, showing the probability of moving from each state to the other 3 states. I've been looking at many examples online, but in all of them the matrix is given, not calculated from data. I also looked into hmmlearn, but nowhere did I find how to have it spit out the transition matrix.
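
One straightforward way, sketched below, is to count observed transitions and normalize each row; the function name and the states argument are illustrative, not from the original question:

import numpy as np

def transition_matrix(seq, states):
    # Map each state label to a row/column index.
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    # Count transitions between consecutive elements of the sequence.
    for cur, nxt in zip(seq[:-1], seq[1:]):
        counts[idx[cur], idx[nxt]] += 1
    # Normalize rows to probabilities; rows with no observed transitions stay zero.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

X = ['A', 'B', 'B', 'C', 'B', 'A', 'D', 'D', 'A', 'B', 'A', 'D']
print(transition_matrix(X, states=['A', 'B', 'C', 'D']))

Each row i of the result gives the estimated probability of moving from state i to every state (including staying in i).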

Is there an elegant and efficient way to implement weighted random choices in golang? Details on current implementation and issues inside

扶醉桌前 submitted on 2020-01-01 17:08:14
Question: tl;dr: I'm looking for methods to implement a weighted random choice based on the relative magnitude of values (or functions of values) in an array in golang. Are there standard algorithms or recommendable packages for this? If so, how do they scale? Goals: I'm trying to write 2D and 3D Markov process programs in golang. A simple 2D example is the following: imagine one has a lattice, and on each site labeled by index (i,j) there are n(i,j) particles. At each time step, the program chooses a site and moves one particle from this site to a random adjacent site. The probability that a site …
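
The question targets Go, but the standard cumulative-weights approach is language-neutral; here is a minimal sketch in Python (to match the other examples on this page), with illustrative names throughout:

import bisect
import random

def weighted_choice(weights):
    # Assumes at least one weight is positive.
    # Build the running (cumulative) sums of the weights: O(n).
    cumulative = []
    total = 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    # Draw a uniform number in [0, total) and locate its interval by binary search: O(log n).
    r = random.random() * total
    return bisect.bisect_right(cumulative, r)

# Example: site occupation counts n(i, j) flattened into a 1D array;
# sites holding more particles are chosen proportionally more often.
counts = [3, 0, 5, 1, 7]
picks = [weighted_choice(counts) for _ in range(10)]
print(picks)

If the weights change rarely, the cumulative array can be precomputed once and reused, so each draw costs only the binary search.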

Markov Model decision process in Java

雨燕双飞 submitted on 2020-01-01 09:39:13
Question: I'm writing an assisted learning algorithm in Java. I've run into a mathematical problem that I can probably solve, but because the processing will be heavy I need an optimal solution. That said, if anyone knows an optimized library, that would be totally awesome, but the language is Java, so that will need to be taken into consideration. The idea is fairly simple: objects will store combinations of variables such as ABDC, ACDE, DE, AE. The max number of combinations will be based on how many …

Fitting Markov Switching Models to data in R

[亡魂溺海] submitted on 2019-12-08 06:43:08
Question: I'm trying to fit two kinds of Markov Switching Models to a time series of log-returns using the package MSwM in R. The models I'm considering are a regression model with only an intercept, and an AR(1) model. Here is the code I'm using:

library(tseries)
# Prices
ftse <- get.hist.quote(instrument="^FTSE", start="1984-01-03", end="2014-01-01", quote="AdjClose", compression="m")
# Log-returns
ftse.ret <- diff(log(ftse))

library(MSwM)
# Model with only intercept
mod <- lm(ftse.ret ~ 1)
# Fit regime …
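
The question is specifically about R's MSwM; purely for comparison, here is a hedged Python sketch of the same two specifications (two regimes, intercept-only and AR(1)) using statsmodels' regime-switching models. The ftse_ret series below is random placeholder data standing in for the FTSE log-returns, not part of the original code:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder log-returns; in the question these come from get.hist.quote + diff(log(...)).
ftse_ret = pd.Series(np.random.randn(360) * 0.04)

# Two-regime model with a switching intercept and switching variance.
mod_const = sm.tsa.MarkovRegression(ftse_ret, k_regimes=2, trend='c',
                                    switching_variance=True)
res_const = mod_const.fit()
print(res_const.summary())

# Two-regime AR(1) model with switching intercept and switching AR coefficient.
mod_ar1 = sm.tsa.MarkovAutoregression(ftse_ret, k_regimes=2, order=1,
                                      switching_ar=True)
res_ar1 = mod_ar1.fit()
print(res_ar1.summary())

# Smoothed probability of being in each regime at each date.
print(res_const.smoothed_marginal_probabilities)

This is not the MSwM answer the poster asked for, only an alternative route to the same class of models.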

Markov Model diagram directly from data (markovchain or heemod package?)

扶醉桌前 submitted on 2019-12-06 14:33:52
Question: I want to read a bunch of factor data and create a transition matrix from it that I can visualise nicely. I found a very sweet package called 'heemod' which, together with 'diagram', does a decent job. For my first quick-and-dirty approach, I ran a piece of Python code to get to the matrix, then used this R snippet to draw the graph. Note that the transition probabilities come from that undisclosed and less important Python code, but you can also just assume that I calculated them on paper.
