The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although ...
Carlos Busso, Zhigang Deng, Serdar Yildirim, Murta...
Although several real multimodal systems have been built, their development remains a difficult task. In this paper we address the problem of developing multimodal inte...
This paper presents the development and evaluation of a speaker-independent audio-visual speech recognition (AVSR) system that utilizes a segment-based modeling strategy. To suppo...
Timothy J. Hazen, Kate Saenko, Chia-Hao La, James ...
We demonstrate a same-time different-place collaboration system for managing crisis situations using geospatial information. Our system enables distributed spatial decision-making...
This work explores the properties of different output modalities as notification mechanisms in the context of messaging. In particular, the olfactory (smell) modality is introdu...
In this paper, we adopt a direct modeling approach to utilize conversational gesture cues in detecting sentence boundaries, called SUs, in videotaped conversations. We treat the ...
Spoken multimodal dialogue systems in which users address face-only or embodied interface agents have been gaining ground in research for some time. Although most systems are still...
This paper reports on an NSF-funded effort now underway to integrate three learning technologies that have emerged and matured over the past decade; each has presented compelling ...
We propose an architecture for a real-time multimodal system, which provides non-contact, adaptive user interfacing for Computer-Assisted Surgery (CAS). The system, called M/ORIS ...
A novel interface system for accessing geospatial data (GeoMIP) has been developed that realizes a user-centered multimodal speech/gesture interface for addressing some of the cri...