We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Micro...
Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Co...
Abstract. We present an interface that lets users create camera scripts and storyboards for virtual scenes through a multimodal combination of speech and gestures. Users can specify ...
In this poster, we propose the design of a multimodal robotic interaction mechanism intended to support storytelling by people with aphasia. Through limited physical interaction,...
We describe an implemented system for the simulation and visualisation of the emotional state of a multimodal conversational agent called Max. The focus of the presented work lies ...
Christian Werner Becker, Stefan Kopp, Ipke Wachsmu...
The way users interact with mobile applications varies with the context in which they are used. We conducted a study in which users manipulated a multimodal questionnaire in 4 d...