This paper presents a method for automatic multimodal person authentication using speech, face and visual speech modalities. The proposed method uses motion information to loc...
In this paper we describe an approach that both creates crosslingual acoustic monophone model sets for speech recognition tasks and objectively predicts their performance without ...
Most cognitive studies of language acquisition in both natural and artificial systems have focused on the role of purely linguistic information as the central co...
The goal of this study is to evaluate the potential for using large vocabulary continuous speech recognition as an engine for automatically classifying utterances according to the...
Steve Lowe, Anne Demedts, Larry Gillick, Mark Mand...
Given that many individuals have difficulty hearing and understanding speech, we plan to supplement the sound of speech and speechreading with an additional informative visual in...