
ICMCS 2006, IEEE

Automatic Addressee Identification Based on Participants' Head Orientation and Utterances for Multiparty Conversations

We propose a method that uses the participants’ head orientation and utterances to automatically identify the addressee of each utterance in face-to-face multiparty conversations, such as meetings. First, each participant’s head orientation is determined through vision-based detection, and the presence or absence of utterances is detected from the power of the voices captured by microphones. Second, gaze direction (whom each participant is looking at) is estimated from the detected head orientation alone using a Support Vector Machine. Third, several related features, such as the amount and frequency of gaze and eye contact, are calculated for each utterance interval. Finally, a Bayesian Network is used to classify each utterance into one of two types: (a) the speaker is addressing a single participant, or (b) the speaker is addressing all participants. Experiments on addressee estimation with 3-person conversations confirm the usefulness of our method.
Yoshinao Takemae, Shinji Ozawa
Type: Conference
Year: 2006
Where: ICMCS
Authors: Yoshinao Takemae, Shinji Ozawa
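
The abstract above describes a two-stage learned pipeline: a Support Vector Machine maps each participant's detected head orientation to a gaze target, and a Bayesian Network then classifies each utterance, from gaze-derived features, as addressed to a single participant or to all participants. The Python sketch below illustrates that flow under stated assumptions only: the toy head-pose numbers, the specific features (gaze shares and gaze-shift frequency), and the use of scikit-learn, with a Gaussian Naive Bayes classifier standing in for the paper's Bayesian Network, are illustrative and not the authors' implementation.

import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

# Stage 1: gaze-target estimation from per-frame head orientation.
# Columns are head yaw and pitch in degrees (made-up values); labels
# 0 and 1 are the two other participants in a 3-person conversation,
# 2 means "looking elsewhere".
head_pose = np.array([[-30.0,  2.0], [-25.0, -1.0],
                      [ 28.0,  3.0], [ 33.0,  0.5],
                      [  1.0, -15.0], [ -2.0, -12.0]])
gaze_target = np.array([0, 0, 1, 1, 2, 2])
gaze_svm = SVC(kernel="rbf", gamma="scale").fit(head_pose, gaze_target)

# Stage 2: per-utterance features computed from the speaker's predicted
# gaze sequence over the frames of one utterance interval.
def utterance_features(speaker_head_poses):
    gaze = gaze_svm.predict(speaker_head_poses)
    share_p0 = np.mean(gaze == 0)            # fraction of frames gazing at participant 0
    share_p1 = np.mean(gaze == 1)            # fraction of frames gazing at participant 1
    shifts = np.mean(gaze[1:] != gaze[:-1])  # gaze-shift frequency
    return [share_p0, share_p1, shifts]

# Toy training utterances: label 0 = addressed to a single participant,
# label 1 = addressed to all participants.
train_utterances = [head_pose[:2], head_pose[2:4], head_pose[[0, 2, 4]], head_pose]
train_labels = [0, 0, 1, 1]
addressee_clf = GaussianNB().fit(
    [utterance_features(u) for u in train_utterances], train_labels)

# Classify a new utterance from its head-orientation frames.
print(addressee_clf.predict([utterance_features(head_pose[[1, 3, 5]])]))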