In this paper, we adopt a direct modeling approach that uses conversational gesture cues to detect sentence boundaries, called SUs, in videotaped conversations. We treat the ...
We present a generic approach to multimodal fusion which we call context-based multimodal integration. Key to this approach is that every multimodal input event is interpreted and...
Current digital documents provide few traces to support user browsing. This makes browsing difficult, and users often find it hard to keep track of all of the informati...
This work explores the properties of different output modalities as notification mechanisms in the context of messaging. In particular, the olfactory (smell) modality is introdu...
We have developed GeoMIP, a novel interface system for accessing geospatial data that realizes a user-centered multimodal speech/gesture interface for addressing some of the cri...