QuickSet [6] is a multimodal system that gives users the capability to create and control map-based collaborative interactive simulations by supporting the simultaneous input from ...
Abstract. Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, ...) is challenging. It requires a high degree of animation control...
In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to t...
Werner Breitfuss, Helmut Prendinger, Mitsuru Ishiz...
We present a gesture-based user interface to Free-Form Deformation (FFD). Traditional interfaces for FFD require the manipulation of individual points in a lattice of control vert...
Eye-typing performance results are reported from controlled studies comparing an on-screen keyboard and EyeWrite, a new on-screen gestural input alternative. Results from the firs...
Jacob O. Wobbrock, James Rubinstein, Michael W. Sa...