Thursday, March 12, 2009
The Infinite Hidden Markov Model
This paper extends the HMM with the Dirichlet process (DP). In a standard HMM, each row of the transition matrix and each emission distribution is multinomial; a Bayesian treatment puts Dirichlet priors on them, and here those Dirichlet priors are replaced by Dirichlet processes, so the number of states is unbounded. Their construction is based on the Chinese restaurant process (CRP). The generative procedure for a transition is: from the current state we either move to an existing state (possibly the same state), with probability proportional to how often that transition has been taken, or we ask an oracle (a DP); the oracle in turn either returns a state it has handed out before or creates a brand-new state (another DP). Emissions are generated in the same way, from a per-state DP that may reuse symbols it has already generated or produce new ones. The whole construction is therefore a hierarchical DP.
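To make the "ask the oracle" picture concrete, here is a minimal sketch (my own illustration, not code from the paper) of the two-level urn for transitions only. I omit the self-transition bias and the emission DP for brevity; the names n, oracle, beta, gamma are mine, chosen to mirror the usual CRP notation.

```python
import random
from collections import defaultdict

def sample_next_state(current, n, oracle, beta=1.0, gamma=1.0):
    """Draw the next state from `current` via the two-level (hierarchical) urn.

    n[i][j]   -- how often transition i -> j has been taken so far
    oracle[j] -- how often the oracle has handed out state j
    beta      -- mass for consulting the oracle instead of reusing a transition
    gamma     -- oracle's mass for creating a brand-new state
    """
    counts = n[current]
    total = sum(counts.values())
    # Level 1: reuse an existing transition with prob. proportional to its count,
    # or consult the oracle with prob. proportional to beta.
    r = random.uniform(0, total + beta)
    if r < total:
        acc = 0.0
        for j, c in counts.items():
            acc += c
            if r < acc:
                counts[j] += 1
                return j
    # Level 2 (oracle): reuse a previously handed-out state, or create a new one.
    o_total = sum(oracle.values())
    r = random.uniform(0, o_total + gamma)
    if r < o_total:
        acc = 0.0
        for j, c in oracle.items():
            acc += c
            if r < acc:
                oracle[j] += 1
                counts[j] += 1
                return j
    new_state = max(oracle.keys(), default=-1) + 1   # brand-new state label
    oracle[new_state] += 1
    counts[new_state] += 1
    return new_state

# Generate a short state sequence from an empty urn.
n = defaultdict(lambda: defaultdict(int))
oracle = defaultdict(int)
state = 0
oracle[state] += 1
seq = [state]
for _ in range(20):
    state = sample_next_state(state, n, oracle)
    seq.append(state)
print(seq)
```

Because both levels are Polya urns, frequently visited states keep getting reinforced while new states can still appear at any time, which is exactly what makes the state space effectively infinite.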
The learning procedure involves five hyperparameters. The posterior over the hidden state sequence has to be approximated with Gibbs sampling. The treatment of hyperparameter learning is rather vague; I am not sure why they introduce yet another prior for the hyperparameters (maybe because the only practical option is to sample them as well -,-b). The model has only been tested on simulated data, so I really doubt its applicability in practice.
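On the point of priors over hyperparameters: the usual reason is exactly that one cannot optimize them in closed form, so they get a prior and are resampled inside the Gibbs sweep. Below is a generic, hedged illustration (not the paper's scheme) for a single DP concentration parameter gamma: with a Gamma(a, b) prior and the standard DP marginal p(K | gamma, n) proportional to gamma^K * Gamma(gamma) / Gamma(gamma + n), a random-walk Metropolis step resamples gamma given the number of instantiated states K and the total count n. The names a, b, step and the toy numbers at the end are my own.

```python
import math
import random

def log_post(gamma, K, n, a=1.0, b=1.0):
    """Unnormalized log posterior of the DP concentration gamma."""
    if gamma <= 0:
        return float("-inf")
    log_lik = K * math.log(gamma) + math.lgamma(gamma) - math.lgamma(gamma + n)
    log_prior = (a - 1) * math.log(gamma) - b * gamma   # Gamma(a, b) prior
    return log_lik + log_prior

def resample_gamma(gamma, K, n, step=0.5, iters=50):
    """Random-walk Metropolis updates for gamma; returns the last sample."""
    for _ in range(iters):
        prop = gamma + random.gauss(0.0, step)
        if math.log(random.random()) < log_post(prop, K, n) - log_post(gamma, K, n):
            gamma = prop
    return gamma

# e.g. 12 instantiated states observed among 500 draws
print(resample_gamma(1.0, K=12, n=500))
```

A step like this would simply be interleaved with the Gibbs updates of the hidden states, which is presumably why the authors go the "prior plus sampling" route rather than fixing the hyperparameters by hand.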