Face-to-face meetings usually encompass several modalities including speech, gesture, handwriting, and person identification. Recognition and integration of each of these modalit...
Ralph Gross, Michael Bett, Hua Yu, Xiaojin Zhu, Yu...
This paper describes our work in usage pattern analysis and development of a latent semantic analysis framework for interpreting multimodal user input consisting of speech and pen ge...
Although technology for communication has evolved tremendously over the past decades, mobility-impaired individuals still face many difficulties interacting with communication serv...
Carlos Galinho Pires, Fernando Miguel Pinto, Eduar...
There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this paper we argue ...
Edward Tse, Saul Greenberg, Chia Shen, Clifton For...
Multimodal interfaces are designed with a focus on flexibility, although very few are currently capable of adapting to major sources of user, task, or environmental variation. The...