Hidden Markov Model for multiple observed variables


Question


I am trying to use a hidden Markov model (HMM) for a problem where I have M different observed variables (Yti) and a single hidden variable (Xt) at each time point, t. For clarity, let us assume all observed variables (Yti) are categorical, where each Yti conveys different information and as such may have different cardinalities. An illustrative example is given in the figure below, where M=3.

My goal is to train the transition, emission, and prior probabilities of an HMM, using the Baum-Welch algorithm, from my observed variable sequences (Yti). Let's say Xt initially has 2 hidden states.

I have read a few tutorials (including the famous Rabiner paper) and gone through the code of a few HMM software packages, namely the HMM Toolbox for MATLAB and the hmmpytk package for Python. Overall, I did an extensive web search, and all the resources I could find cover only the case where there is a single observed variable (M=1) at each time point. This increasingly makes me think HMMs are not suitable for situations with multiple observed variables.

  • Is it possible to model the problem depicted in the figure as an HMM?
  • If it is, how can one modify the Baum-Welch algorithm to cater for training the HMM parameters based on the multi-variable observation (emission) probabilities?
  • If not, do you know of a methodology that is more suitable for the situation depicted in the figure?

Thanks.

Edit: In this paper, the situation depicted in the figure is described as a Dynamic Naive Bayes, which, in terms of the training and estimation algorithms, requires a slight extension of the Baum-Welch and Viterbi algorithms for a single-variable HMM.


Answer 1:


The simplest way to do this, while keeping the model generative, is to make the y_i's conditionally independent given the x_i's. This leads to trivial estimators and relatively few parameters, but it is a fairly restrictive assumption in some cases (it is essentially the HMM analogue of the Naive Bayes classifier).

EDIT: here is what this means. For each timestep i, you have a multivariate observation y_i = {y_i1, ..., y_in}. You treat the y_ij as conditionally independent given x_i, so that:

p(y_i|x_i) = \prod_j p(y_ij | x_i)

You're then effectively learning a naive Bayes classifier for each possible value of the hidden variable x. (The conditional independence is important here: there are still dependencies in the unconditional distribution of the y's.) This can be learned with standard EM for an HMM.
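A minimal sketch of what this factorization changes in practice, assuming the emission parameters are stored as one K x C_m matrix per channel (the names `joint_emission_likelihoods`, `emissions`, and the (T, M) layout of `obs` are hypothetical, not from any of the packages mentioned above). Once the per-timestep likelihood matrix B is computed as a product over channels, the rest of Baum-Welch (forward, backward, re-estimation) is unchanged:

```python
import numpy as np

def joint_emission_likelihoods(emissions, obs):
    """B[t, k] = prod_m P(y_tm | x_t = k) under conditional independence.

    emissions: list of M arrays; emissions[m] has shape (K, C_m)
    obs:       (T, M) integer array of category indices
    """
    T, _ = obs.shape
    K = emissions[0].shape[0]
    B = np.ones((T, K))
    for m, E in enumerate(emissions):
        B *= E[:, obs[:, m]].T  # (T, K) likelihood slice for channel m
    return B

def forward(pi, A, B):
    """Standard scaled forward pass; only B changed, the recursion did not."""
    T, K = B.shape
    alpha = np.zeros((T, K))
    scale = np.zeros(T)
    alpha[0] = pi * B[0]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]
    return alpha, scale
```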

You could also, as one commenter said, treat the concatenation of the y_ij's as a single observation. But the number of joint categories is the product of the individual cardinalities, so unless those cardinalities are trivial this leads to a lot of parameters, and you'll need far more training data.
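To make the blow-up concrete, here is a sketch of that concatenation (the function name and the (T, M) shape of `obs` are hypothetical illustrations):

```python
import numpy as np

def concat_observations(obs, card):
    """Map each row (c_1, ..., c_M) of obs to a single index in
    [0, prod(card)), turning M categorical channels into one variable."""
    return np.ravel_multi_index(obs.T, card)

# Example: card = [4, 3, 5] yields one variable with 4*3*5 = 60 categories,
# so a K-state emission matrix needs K*60 entries instead of K*(4+3+5)
# under the conditionally independent model.
```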

Do you specifically need the model to be generative? If you only need inference on the x_i's, you would probably be much better served by a conditional random field, which, through its feature functions, can use far more complex observations without the same restrictive independence assumptions.




Answer 2:


I found that this can be achieved by modelling the system as a Dynamic Naive Bayes classifier (DNB), a slight extension of an ordinary (single-variable) HMM that can cater for multi-observation scenarios such as the one shown in the figure.

Note that a DNB still has a hidden state, so it should not be regarded as a direct sequential extension of the original Naive Bayes classifier. The 'naive' in the name comes from the fact that all observed variables are independent of each other given the hidden state variable.

As with an HMM, the parameters of this model can be estimated via the Baum-Welch (or EM, if you prefer) algorithm. Since the emission distribution at each time step is now the product of P(Yti|Xt) over the observed variables Yti, the forward, backward, and joint variable equations need to be slightly modified, as described in section 3 of this paper by Aviles-Arriaga et al.
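As a sketch of that modification, in Rabiner's notation (forward variable alpha, backward variable beta, transition probabilities a_ij, and per-channel emission distributions b_j^m; the product over m is the only change relative to the single-variable recursions):

```latex
\alpha_{t+1}(j) = \Big[\sum_{i=1}^{N} \alpha_t(i)\, a_{ij}\Big] \prod_{m=1}^{M} b_j^{m}\big(y_{t+1}^{m}\big),
\qquad
\beta_t(i) = \sum_{j=1}^{N} a_{ij} \Big[\prod_{m=1}^{M} b_j^{m}\big(y_{t+1}^{m}\big)\Big] \beta_{t+1}(j)
```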




Answer 3:


What you are looking for is called a structured perceptron. Take a look at slide 42 of the following deck: http://www.cs.umd.edu/class/fall2015/cmsc723/slides/inclass_09.pdf
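The linked slides cover the technique; below is a generic, minimal sketch (not taken from the slides) of a structured perceptron for sequence labeling with M categorical observation channels. All names and shapes here (`StructuredPerceptron`, `W_emit`, `card`, `obs` as a (T, M) index array) are hypothetical. The update is the standard one: decode with Viterbi under the current weights, then add the gold features and subtract the predicted features.

```python
import numpy as np

class StructuredPerceptron:
    def __init__(self, K, card):
        self.K = K                                   # number of labels
        self.card = card                             # card[m]: channel m cardinality
        self.W_emit = [np.zeros((K, c)) for c in card]  # per-channel emission weights
        self.W_trans = np.zeros((K, K))              # label transition weights

    def scores(self, obs):
        """(T, K) local scores: sum of per-channel emission weights."""
        S = np.zeros((obs.shape[0], self.K))
        for m, W in enumerate(self.W_emit):
            S += W[:, obs[:, m]].T
        return S

    def viterbi(self, obs):
        """Highest-scoring label sequence under the current weights."""
        S = self.scores(obs)
        T = S.shape[0]
        delta = np.zeros((T, self.K))
        back = np.zeros((T, self.K), dtype=int)
        delta[0] = S[0]
        for t in range(1, T):
            cand = delta[t - 1][:, None] + self.W_trans  # (K, K): prev x next
            back[t] = cand.argmax(axis=0)
            delta[t] = cand.max(axis=0) + S[t]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    def update(self, obs, gold):
        """One perceptron step: +features(gold), -features(prediction)."""
        pred = self.viterbi(obs)
        for t, (g, p) in enumerate(zip(gold, pred)):
            if g != p:
                for m in range(len(self.card)):
                    self.W_emit[m][g, obs[t, m]] += 1.0
                    self.W_emit[m][p, obs[t, m]] -= 1.0
            if t > 0:
                self.W_trans[gold[t - 1], g] += 1.0
                self.W_trans[pred[t - 1], p] -= 1.0
```

Note that, like the CRF suggestion above, this is a discriminative approach: it predicts the hidden labels directly and does not give you a generative model of the observations.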




Answer 4:


You could model the problem using tensors: structure a tensor from the observed time series and then identify the HMM parameters from it. "Hidden Markov Model Identifiability via Tensors" is a good reference for this.

There is a Tensor Toolbox for MATLAB.

FYI, I am working on a related problem, so feel free to email me if you want to discuss this privately.




Answer 5:


You can try a hidden semi-Markov model (HSMM), an extension of the HMM that allows each state to persist for multiple time steps.




Answer 6:


This paper proposes an algorithm to solve the problem.



Source: https://stackoverflow.com/questions/17487356/hidden-markov-model-for-multiple-observed-variables
