Sciweavers

551 search results - page 57 / 111
» Multimodal Speech Synthesis
TSD
2004
Springer
Multimodal Phoneme Recognition of Meeting Data
This paper describes experiments in automatic recognition of context-independent phoneme strings from meeting data using audiovisual features. Visual features are known to improve ...
Petr Motlíček, Jan Černocký
IUI
2010
ACM
Usage patterns and latent semantic analyses for task goal inference of multimodal user interactions
This paper describes our work on usage pattern analysis and the development of a latent semantic analysis framework for interpreting multimodal user input consisting of speech and pen ge...
Pui-Yu Hui, Wai Kit Lo, Helen M. Meng
TCSV
2011
Concept-Driven Multi-Modality Fusion for Video Search
As it is true for human perception that we gather information from different sources in natural and multi-modality forms, learning from multi-modalities has become an effective ...
Xiao-Yong Wei, Yu-Gang Jiang, Chong-Wah Ngo
IADIS
2003
Multimodal Interaction and Access to Complex Data
Today’s users want to access their data everywhere and at any time, in a variety of environments and situations. The data itself can be very complex; the problem is then in providi...
Vladislav Nemec, Pavel Zikovsky, Pavel Slaví...
ICMCS
2006
IEEE
Remote Voice Acquisition in Multimodal Surveillance
Multimodal surveillance systems using visible/IR cameras and other sensors are widely deployed today for security purposes, particularly when subjects are at a large distance. Howe...
Weihong Li, Zhigang Zhu, George Wolberg