Sciweavers

137 search results - page 26 / 28 for "Real-Time Facial Expression Recognition for Natural Interact..."
AIHC 2007 (Springer)
Gaze-X: Adaptive, Affective, Multimodal Interface for Single-User Office Scenarios
This paper describes an intelligent system that we developed to support affective multimodal human-computer interaction (AMM-HCI) where the user’s actions and emotions are modele...
Ludo Maat, Maja Pantic
ICMI 2005 (Springer)
Inferring body pose using speech content
Untethered multimodal interfaces are more attractive than tethered ones because they are more natural and expressive for interaction. Such interfaces usually require robust vision...
Sy Bor Wang, David Demirdjian
WSDM 2010 (ACM)
Early Online Identification of Attention Gathering Items In Social Media
Activity in social media such as blogs, micro-blogs, social networks, etc., is manifested via interaction that involves text, images, links, and other information items. Naturally, s...
Michael Mathioudakis, Nick Koudas, Peter Marbach
AVI 2008
Exploring emotions and multimodality in digitally augmented puppeteering
Recently, multimodal and affective technologies have been adopted to support expressive and engaging interaction, bringing up a plethora of new research questions. Among the chall...
Lassi A. Liikkanen, Giulio Jacucci, Eero Huvio, To...
HRI 2006 (ACM)
Using context and sensory data to learn first and second person pronouns
We present a method of grounded word learning that is powerful enough to learn the meanings of first and second person pronouns. The model uses the understood words in an utteran...
Kevin Gold, Brian Scassellati