Substate Tying With Combined Parameter Training and Reduction in Tied-Mixture HMM Design

To conclude the TAPE courses (TAPE stands for Traitement Automatique de la Parole et de l'Écrit, i.e. automatic speech and handwriting processing) at the ENST, I had to study a paper by Liang Gu and Kenneth Rose entitled Substate Tying With Combined Parameter Training and Reduction in Tied-Mixture HMM Design.

Here is an extract from the abstract at the beginning of the paper:

Two approaches are proposed for the design of tied-mixture hidden Markov models (TMHMM). One approach improves parameter sharing via partial tying of TMHMM states. To facilitate tying at the substate level, the state emission probabilities are constructed in two stages or, equivalently, are viewed as a mixture of mixtures of Gaussians. (...)
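To picture this two-stage construction, the emission probability of a state j can be written as a mixture over substates, each substate being itself a mixture over a shared codebook of Gaussians. The notation below is my own reading of the abstract, using standard HMM conventions rather than the paper's exact symbols:

\[ b_j(o) = \sum_{s=1}^{S_j} c_{js} \left[ \sum_{k=1}^{K} w_{sk} \, \mathcal{N}(o; \mu_k, \Sigma_k) \right] \]

Here the K Gaussians \mathcal{N}(o; \mu_k, \Sigma_k) form the codebook tied across all states, the inner weights w_{sk} define substate s, and the outer weights c_{js} mix the substates into the state emission. Tying at the substate level then amounts to letting several states share inner mixtures, instead of having to share (or not share) whole states.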

Another approach to enhance model training is combined training and reduction of model parameters. The procedure starts by training a system with a large universal codebook of Gaussian densities. It then iteratively reduces the size of both the codebook and the mixing coefficient matrix, followed by parameter re-training. The additional cost in design complexity is modest. (...)
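The general shape of that loop is easy to sketch. The following minimal Python illustration shows the iterate-reduce-retrain idea only, not the authors' actual algorithm: the retrain callback (standing in for e.g. Baum-Welch re-estimation) and the pruning criterion (dropping the codebook Gaussian with the smallest total mixing weight) are my own assumptions.

    import numpy as np

    # Illustrative sketch of combined parameter training and reduction.
    # retrain is a hypothetical callback that re-estimates all parameters
    # (e.g. by Baum-Welch); the pruning rule below is an assumption.

    def reduce_tmhmm(means, covs, mixing, retrain, target_size):
        """means: (K, d) codebook means; covs: (K, d, d) covariances;
        mixing: (n_states, K) mixing-coefficient matrix; K shrinks each pass."""
        while means.shape[0] > target_size:
            # Re-train the full parameter set at the current codebook size.
            means, covs, mixing = retrain(means, covs, mixing)

            # Drop the codebook Gaussian that contributes least overall,
            # together with its column of mixing coefficients.
            k = np.argmin(mixing.sum(axis=0))
            means = np.delete(means, k, axis=0)
            covs = np.delete(covs, k, axis=0)
            mixing = np.delete(mixing, k, axis=1)

            # Renormalize each state's mixing coefficients to sum to one.
            mixing /= mixing.sum(axis=1, keepdims=True)

        # Final re-training at the target codebook size.
        return retrain(means, covs, mixing)

In practice one would also prune near-zero entries of the mixing matrix itself, as the abstract mentions; I left that out to keep the sketch short.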

When the two proposed approaches were integrated, 25% error rate reduction over TMHMM with whole-state tying was achieved.

Link  Size   Description
PDF   824 k  My slides (in French)
PDF   304 k  The article by Gu and Rose