# hidden-markov-models

## Decoding sequences in a GaussianHMM

Question: I'm playing around with Hidden Markov Models for a stock market prediction problem. My data matrix contains various features for a particular security:

    01-01-2001, .025, .012, .01
    01-02-2001, -.005, -.023, .02

I fit a simple GaussianHMM:

```python
from hmmlearn.hmm import GaussianHMM

mdl = GaussianHMM(n_components=3, covariance_type='diag', n_iter=1000)
mdl.fit(train[:, 1:])
```

With the fitted model (λ), I can decode an observation vector to find the most likely hidden state sequence corresponding to the observations.
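In hmmlearn this decoding step is exposed directly as `mdl.predict(X)` (or `logprob, states = mdl.decode(X)`), both of which run the Viterbi algorithm. The algorithm itself can be sketched in plain numpy; the toy 2-state, 2-symbol HMM below uses made-up probabilities purely for illustration:

```python
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """Most likely hidden state sequence for a discrete-emission HMM."""
    n_states = log_trans.shape[0]
    T = len(obs)
    delta = np.zeros((T, n_states))            # best log-prob ending in each state
    psi = np.zeros((T, n_states), dtype=int)   # backpointers
    delta[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # (from, to)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    # Backtrack from the best final state
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy model: the numbers here are assumptions, not fitted values
start = np.log([0.6, 0.4])
trans = np.log([[0.7, 0.3], [0.4, 0.6]])
emit = np.log([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1], start, trans, emit))  # [0 0 1 1]
```

A fitted `GaussianHMM` does the same thing with Gaussian emission densities in place of the discrete emission rows.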


## R msm: BLAS/LAPACK routine 'DGEBAL' gave error code -3

Question: I'm trying to make a basic Markov model using the package msm, and things were working fine until I suddenly started receiving the following error. I don't know why it has started throwing this, as it was working fine earlier and I don't think I've changed anything. The error seems to point to the linear algebra library, but I don't know what to do with it exactly:

```
Error in balance(baP$z, "S") : BLAS/LAPACK routine 'DGEBAL' gave error code -3
```

The code is as follows:

## Scikit Learn HMM training with set of observation sequences

Question: I had a question about how I can use GaussianHMM in the scikit-learn package to train on several different observation sequences all at once. The example here, visualizing the stock market structure, shows EM converging on one long observation sequence. But in many scenarios we want to break up the observations (like training on a set of sentences), with each observation sequence having a START and an END state. That is, I would like to train globally on multiple observation sequences. How can one do this?
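In current hmmlearn (the HMM code that was later split out of scikit-learn), `fit` accepts a `lengths` argument for exactly this: stack all sequences into one array and tell the model where each one ends, so EM treats them as independent sequences rather than one long chain. A minimal numpy illustration of the packing step (the sequence values here are made up):

```python
import numpy as np

# Three separate observation sequences of 2-D features (made-up data)
seq1 = np.array([[0.1, 0.2], [0.3, 0.1]])
seq2 = np.array([[0.0, 0.5], [0.2, 0.2], [0.4, 0.1]])
seq3 = np.array([[0.9, 0.3]])

# Stack the sequences and record each one's length
X = np.concatenate([seq1, seq2, seq3])
lengths = [len(seq1), len(seq2), len(seq3)]
print(X.shape, lengths)  # (6, 2) [2, 3, 1]

# With hmmlearn this trains on all sequences jointly, without linking
# the end of one sequence to the start of the next:
#   from hmmlearn.hmm import GaussianHMM
#   mdl = GaussianHMM(n_components=3).fit(X, lengths)
```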


## hmmlearn: how to get the prediction for the hidden state probability at time T+1, given a full observation sequence 1:T

Source: https://stackoverflow.com/questions/44350447/hmmlearn-how-to-get-the-prediction-for-the-hidden-state-probability-at-time-t1
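The usual answer: run the forward pass over the observations 1:T to get the state posterior at time T (in hmmlearn, the last row of `mdl.predict_proba(X)`), then push that distribution one step through the transition matrix (`mdl.transmat_`). A numpy sketch with made-up numbers:

```python
import numpy as np

# Posterior over hidden states at time T, e.g. the last row of
# mdl.predict_proba(X); these values are made up for illustration
p_T = np.array([0.2, 0.5, 0.3])

# Transition matrix (mdl.transmat_ in hmmlearn); each row sums to 1
A = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# One-step prediction: P(state at T+1 | observations 1:T)
p_next = p_T @ A
print(p_next)  # [0.35 0.41 0.24]
```

The predicted observation distribution at T+1 follows by mixing the per-state emission densities with the weights in `p_next`.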

## Problems with a hidden Markov model in PyMC3

Question: To learn PyMC, I'm trying to fit a simple Hidden Markov Model, as shown below:

```python
import numpy as np
import pymc3

with pymc3.Model() as hmm:
    # Transition "matrix"
    a_t = np.ones(num_states)
    T = [pymc3.Dirichlet('T{0}'.format(i), a=a_t, shape=num_states)
         for i in range(num_states)]

    # Emission "matrix"
    a_e = np.ones(num_emissions)
    E = [pymc3.Dirichlet('E{0}'.format(i), a=a_e, shape=num_emissions)
         for i in range(num_states)]

    # State models
    p0 = np.ones(num_states) / num_states
    # No shape, so each state is a scalar tensor
```
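The generative process this model is meant to invert can be written in a few lines of plain numpy, which is a useful sanity check when debugging the PyMC3 version against simulated data (the sizes and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
num_states, num_emissions, seq_len = 3, 4, 10

# Row-stochastic transition and emission matrices drawn from flat
# Dirichlets, matching the priors in the PyMC3 model above
trans = rng.dirichlet(np.ones(num_states), size=num_states)
emit = rng.dirichlet(np.ones(num_emissions), size=num_states)

# Simulate a hidden state path and its observations
states = [rng.integers(num_states)]          # uniform initial state
obs = [rng.choice(num_emissions, p=emit[states[0]])]
for _ in range(seq_len - 1):
    states.append(rng.choice(num_states, p=trans[states[-1]]))
    obs.append(rng.choice(num_emissions, p=emit[states[-1]]))
print(states, obs)
```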

## Hidden Markov model: next state depends only on the previous state? What about the previous n states?

Question: I am working on a prototype framework. Basically, I need to generate a model or profile of each individual's lifestyle based on sensor data about him/her, such as GPS, motion, heart rate, surrounding environment readings, temperature, etc. The proposed model or profile is a knowledge representation of an individual's lifestyle pattern, perhaps a graph with probabilities. I am thinking of using a Hidden Markov Model to implement this, as the states in the HMM can be Working, Sleeping, Leisure, etc.
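The standard answer to the order-n question: any order-n Markov chain can be rewritten as a first-order chain whose states are tuples of the last n states, at the cost of an exponentially larger state space. A sketch for n = 2 over the three lifestyle states above (the transition table here is made up for illustration):

```python
import itertools
import numpy as np

states = ['Working', 'Sleeping', 'Leisure']
n = len(states)

# Second-order transitions: P(next | prev2, prev1); made-up numbers,
# starting from uniform and overriding one entry for illustration
P2 = np.full((n, n, n), 1.0 / n)
P2[0, 0] = [0.7, 0.2, 0.1]   # after (Working, Working)

# Equivalent first-order chain over pairs: (prev2, prev1) -> (prev1, next)
pairs = list(itertools.product(range(n), repeat=2))
idx = {p: i for i, p in enumerate(pairs)}
A = np.zeros((n * n, n * n))
for (a, b) in pairs:
    for c in range(n):
        A[idx[(a, b)], idx[(b, c)]] = P2[a, b, c]

# Each row of the expanded matrix is still a probability distribution
print(A.shape, A.sum(axis=1))
```

The same construction applies to an HMM: the hidden chain over pairs is first-order, so standard HMM machinery still works, just on n² states instead of n.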

## simple speech recognition methods

Question: Yes, I'm aware that speech recognition is fairly complicated (as an understatement). What I'm looking for is a method for distinguishing between maybe 20-30 phrases. The ability to split words (discrete speech is fine) would be nice, but isn't required. The software will be user-dependent (i.e. for use by me). I'm not looking for existing software, but for a good way of going about doing this myself. I've looked into various existing methods, and it seems like splitting the sound into phonemes is a common first step.
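For a small, user-dependent vocabulary, a common lightweight alternative to a full HMM recognizer is template matching with dynamic time warping (DTW) over per-frame features such as MFCCs: record one template per phrase and classify a new utterance by its cheapest warped alignment. A minimal DTW on 1-D feature sequences (real use would compare MFCC vectors per frame, e.g. with Euclidean distance):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a template should match it better than a
# different phrase, despite the different lengths
template = [0, 1, 2, 3, 2, 1, 0]
utterance = [0, 1, 1, 2, 3, 3, 2, 1, 0]   # same shape, different timing
other = [3, 3, 3, 0, 0, 0, 3, 3, 3]
print(dtw_distance(template, utterance) < dtw_distance(template, other))  # True
```

With 20-30 phrases, classification is just the argmin of the DTW cost over all stored templates, which is easily fast enough at this scale.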