GW 2005, Springer

From Acoustic Cues to an Expressive Agent

This work proposes a new way of providing feedback on expressivity in music performance. Building on studies of expressive music performance, we developed a system that gives the user visual feedback through a graphical representation of a human face. The first part of the system, previously developed by researchers at KTH Stockholm and at the University of Uppsala, allows the real-time extraction and analysis of acoustic cues from the music performance. The extracted cues are sound level, tempo, articulation, attack time, and spectrum energy. From these cues the system derives a high-level interpretation of the performer's emotional intention, which is classified as one basic emotion, such as happiness, sadness, or anger. We have implemented an interface between that system and the embodied conversational agent Greta, developed at the University of Rome “La Sapienza” and the University of Paris 8. We model expressivity of the facial animation of ...
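The pipeline described above (acoustic cues → basic-emotion label) can be sketched as a toy rule-based classifier. This is not the authors' actual method; the cue names come from the abstract, but the thresholds and decision rules below are purely illustrative assumptions, loosely inspired by the common finding that sad performances tend to be slow and quiet while angry ones are loud with sharp attacks.

```python
from dataclasses import dataclass

@dataclass
class AcousticCues:
    """Cues named in the abstract, here assumed normalized to 0..1."""
    sound_level: float   # 0 = quiet .. 1 = loud
    tempo: float         # 0 = slow .. 1 = fast
    articulation: float  # 0 = legato .. 1 = staccato
    attack_time: float   # 0 = soft attacks .. 1 = sharp attacks

def classify_emotion(cues: AcousticCues) -> str:
    """Toy mapping from cues to one basic emotion (illustrative only)."""
    # Slow and quiet performances are labeled sad.
    if cues.tempo < 0.4 and cues.sound_level < 0.4:
        return "sadness"
    # Loud playing with sharp attacks is labeled angry.
    if cues.sound_level > 0.7 and cues.attack_time > 0.6:
        return "anger"
    # Everything else defaults to happy in this sketch.
    return "happiness"
```

In the real system the resulting label would then drive the facial expression of the Greta agent; here the classifier simply returns a string, e.g. `classify_emotion(AcousticCues(0.2, 0.3, 0.1, 0.2))` yields `"sadness"`.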
Maurizio Mancini, Roberto Bresin, Catherine Pelachaud
Type Conference
Year 2005
Where GW
Authors Maurizio Mancini, Roberto Bresin, Catherine Pelachaud