Sciweavers

69 search results for "Multimodal expressive embodied conversational agents" (page 11 of 14)
ATAL 2010 (Springer)
How was your day?: a companion ECA
We demonstrate a "Companion" ECA, which is able to provide advice and support to the user, taking into account emotions expressed by her through dialogue. The integratio...
Marc Cavazza, Raul Santos de la Camara, Markku Tur...
ICMI 2005 (Springer)
Contextual recognition of head gestures
Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people. We investigate how dialog context from an ...
Louis-Philippe Morency, Candace L. Sidner, Christo...
AICS 2009
SceneMaker: Intelligent Multimodal Visualisation of Natural Language Scripts
Producing plays, films or animations is a complex and expensive process involving various professionals and media. Our proposed software system, SceneMaker, aims to facil...
Eva Hanser, Paul McKevitt, Tom Lunney, Joan Condel...
AGENTS 2000 (Springer)
Experimental assessment of the effectiveness of synthetic personae for multi-modal e-retail applications
This paper details results of an experiment to empirically evaluate the effectiveness and user acceptability of human-like synthetic agents in a multi-modal electronic retail scen...
Helen McBreen, Paul Shade, Mervyn A. Jack, Peter J...
AAAI 2006
The Role of Context in Head Gesture Recognition
Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people. We investigate how dialog context from an ...
Louis-Philippe Morency, Candace L. Sidner, Christo...