INTERSPEECH 2010

HMM-based text-to-articulatory-movement prediction and analysis of critical articulators

In this paper we present a method to predict the movement of a speaker's mouth from text input using hidden Markov models (HMMs). We have used a corpus of human articulatory movements, recorded by electromagnetic articulography (EMA), to train the HMMs. To predict articulatory movements from text, a suitable model sequence is selected and the maximum-likelihood parameter generation (MLPG) algorithm is used to generate output articulatory trajectories. In our experiments, we find that fully context-dependent models outperform monophone and quinphone models, achieving an average …
Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi
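The MLPG step mentioned in the abstract can be sketched as follows: given per-frame Gaussian means and variances over static and delta features (as produced by an HMM state sequence), the most likely static trajectory is the solution of a weighted least-squares system built from the delta window. This is a minimal single-channel sketch, not the paper's implementation; the function name, interface, and delta window are illustrative assumptions.

```python
import numpy as np

def mlpg(means, variances, delta_win=(-0.5, 0.0, 0.5)):
    """Maximum-likelihood parameter generation for one articulator channel.

    means, variances : (T, 2) arrays of [static, delta] Gaussian statistics
    delta_win        : weights defining delta_t = sum_k w_k * c_{t+k}

    Returns the static trajectory c of shape (T,) that maximizes the
    likelihood under the delta constraint, i.e. solves
        (W^T P W) c = W^T P mu,  with P = diag(1 / variances).
    """
    T = means.shape[0]
    # W maps the static trajectory c (T,) to stacked [static; delta]
    # observations (2T,): even rows copy c, odd rows apply the delta window.
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0                          # static row
        for k, w in zip((-1, 0, 1), delta_win):    # delta row
            if 0 <= t + k < T:
                W[2 * t + 1, t + k] = w
    mu = means.reshape(-1)
    prec = 1.0 / variances.reshape(-1)             # diagonal precisions
    A = W.T @ (prec[:, None] * W)
    b = W.T @ (prec * mu)
    return np.linalg.solve(A, b)
```

With very large delta variances the delta constraint is effectively switched off and the solution collapses to the static means; tightening the delta variances smooths the trajectory, which is the mechanism that yields continuous articulator movements from piecewise-constant HMM state statistics.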
Type Journal
Year 2010
Where INTERSPEECH