Hidden Markov model classifying a sequence in MATLAB

守給你的承諾、 submitted on 2019-11-30 23:18:24

The task is to build and train a hidden Markov model with the following components, using murphyk's HMM toolbox:

  1. O = dimensionality of each observation vector
  2. Q = number of states
  3. T = number of vectors in a sequence
  4. nex = number of sequences
  5. M = number of mixtures

Demo Code (from murphyk's toolbox):

    O = 8;          % number of coefficients in an observation vector
    T = 420;        % number of vectors in a sequence
    nex = 1;        % number of sequences
    M = 1;          % number of mixtures
    Q = 6;          % number of states

    data = randn(O, T, nex);

    % initial guess of parameters
    prior0 = normalise(rand(Q,1));
    transmat0 = mk_stochastic(rand(Q,Q));

    if 0
        Sigma0 = repmat(eye(O), [1 1 Q M]);
        % initialize each mean to a random data point
        indices = randperm(T*nex);
        mu0 = reshape(data(:,indices(1:(Q*M))), [O Q M]);
        mixmat0 = mk_stochastic(rand(Q,M));
    else
        [mu0, Sigma0] = mixgauss_init(Q*M, data, 'full');
        mu0 = reshape(mu0, [O Q M]);
        Sigma0 = reshape(Sigma0, [O O Q M]);
        mixmat0 = mk_stochastic(rand(Q,M));
    end

    [LL, prior1, transmat1, mu1, Sigma1, mixmat1] = ...
        mhmm_em(data, prior0, transmat0, mu0, Sigma0, mixmat0, 'max_iter', 5);

    loglik = mhmm_logprob(data, prior1, transmat1, mu1, Sigma1, mixmat1);
Amro

Here is a general outline of the approach to classifying d-dimensional sequences using hidden Markov models:

1) Training:

For each class k:

  • prepare an HMM model. This includes initializing the following:
    • a transition matrix: Q-by-Q matrix, where Q is the number of states
    • a vector of prior probabilities: Q-by-1 vector
    • the emission model: in your case the observations are 3D points, so you could use a multivariate normal distribution (with a specified mean vector and covariance matrix) or a Gaussian mixture model (several MVN distributions combined using mixture coefficients)
  • after properly initializing the above parameters, train the HMM by feeding it the set of sequences belonging to this class (EM algorithm).
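The training loop above can be sketched with the same toolbox functions as the demo code. This is a minimal sketch, not a complete implementation: it assumes Murphy's HMM toolbox is on the path, and `train_data` (a cell array where `train_data{k}` is an O-by-T-by-nex array of training sequences for class k) is a hypothetical variable you would fill in yourself.

    % Train one Gaussian-mixture HMM per class (sketch).
    O = 3;   % observation dimension (e.g. 3D points)
    Q = 6;   % number of hidden states
    M = 1;   % Gaussian mixtures per state

    nclasses = numel(train_data);
    models = cell(1, nclasses);
    for k = 1:nclasses
        data = train_data{k};                      % O-by-T-by-nex sequences for class k
        prior0 = normalise(rand(Q,1));             % random initial state priors
        transmat0 = mk_stochastic(rand(Q,Q));      % random row-stochastic transitions
        [mu0, Sigma0] = mixgauss_init(Q*M, data, 'full');
        mu0 = reshape(mu0, [O Q M]);
        Sigma0 = reshape(Sigma0, [O O Q M]);
        mixmat0 = mk_stochastic(rand(Q,M));
        [LL, prior1, transmat1, mu1, Sigma1, mixmat1] = ...
            mhmm_em(data, prior0, transmat0, mu0, Sigma0, mixmat0, 'max_iter', 10);
        models{k} = struct('prior',prior1, 'transmat',transmat1, ...
                           'mu',mu1, 'Sigma',Sigma1, 'mixmat',mixmat1);
    end

Storing the learned parameters per class in a struct keeps the prediction step simple: each class is fully described by its own (prior, transmat, mu, Sigma, mixmat) tuple.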

2) Prediction

Next, to classify a new sequence X:

  • you compute the log-likelihood of the sequence under each model, log P(X|model_k)
  • then you pick the class that gave the highest likelihood; this is the predicted class.
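The two prediction steps can be sketched as follows. This assumes `models` is a cell array of trained HMM structs with fields `prior`, `transmat`, `mu`, `Sigma`, and `mixmat` (an assumption about how you stored the training results), and `X` is a new O-by-T observation sequence:

    % Classify a new sequence X by maximum log-likelihood (sketch).
    loglik = zeros(1, numel(models));
    for k = 1:numel(models)
        m = models{k};
        loglik(k) = mhmm_logprob(X, m.prior, m.transmat, m.mu, m.Sigma, m.mixmat);
    end
    [~, predicted_class] = max(loglik);   % index of the best-scoring class model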

As I mentioned in the comments, the Statistics Toolbox only implements discrete-observation HMM models, so you will have to find another library or implement the code yourself. Kevin Murphy's toolboxes (HMM toolbox, BNT, PMTK3) are popular choices in this domain.
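For comparison, here is what the built-in Statistics Toolbox functions look like; note they only accept discrete symbol sequences, which is why they do not fit the 3D-point observations above. A minimal sketch (the state count, alphabet size, and random initial guesses are all assumptions for illustration):

    % Discrete-observation HMM with the Statistics Toolbox (sketch).
    Q = 2;  V = 4;                         % number of states, symbol alphabet size
    TRguess = rand(Q,Q);  TRguess = TRguess ./ sum(TRguess, 2);  % row-stochastic guess
    Eguess  = rand(Q,V);  Eguess  = Eguess  ./ sum(Eguess, 2);
    seq = randi(V, 1, 200);                % a sequence of discrete symbols 1..V
    [TRest, Eest] = hmmtrain(seq, TRguess, Eguess);   % Baum-Welch (EM)
    [~, logpseq] = hmmdecode(seq, TRest, Eest);       % log P(seq | model)

The second output of hmmdecode gives the sequence log-likelihood, so the same pick-the-best-model classification scheme would work here too, but only for symbol-valued observations.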

I have posted some answers in the past using Kevin Murphy's toolboxes. They are somewhat different from what you are trying to do here, but they are a good place to start.
