Sciweavers

76 search results, page 11 of 16
Search: Predicting Subjectivity in Multimodal Conversations

FGR 2004 (IEEE)
Multimodal Coordination of Facial Action, Head Rotation, and Eye Motion during Spontaneous Smiles
Both the configuration of facial features and the timing of facial actions are important to emotion and communication. Previous literature has focused on the former. We developed ...
Jeffrey F. Cohn, Lawrence Ian Reed, Tsuyoshi Moriy...

HRI 2009 (ACM)
How to approach humans?: strategies for social robots to initiate interaction
This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures in ...
Satoru Satake, Takayuki Kanda, Dylan F. Glas, Mich...

ICMI 2009 (Springer)
Speaker change detection with privacy-preserving audio cues
In this paper we investigate a set of privacy-sensitive audio features for speaker change detection (SCD) in multiparty conversations. These features are based on three different...
Sree Hari Krishnan Parthasarathi, Mathew Magimai-D...

IUI 2006 (ACM)
Automatic prediction of misconceptions in multilingual computer-mediated communication
Multilingual communities that use machine translation to overcome language barriers are emerging with increasing frequency. However, when a large number of translation errors get m...
Naomi Yamashita, Toru Ishida

ICMI 2009 (Springer)
Dialog in the open world: platform and applications
We review key challenges of developing spoken dialog systems that can engage in interactions with one or multiple participants in relatively unconstrained environments. We outline...
Dan Bohus, Eric Horvitz