Mathias Funk, Kazuhiro Kuwabara, Michael J. Lyons

The central role of the face in social interaction and non-verbal communication suggests exploring facial action as a means of musical expression. This paper presents the design, implementation, and preliminary study of a novel system that uses face detection and optic flow algorithms to associate facial movements with sound synthesis in a topographically specific fashion. We report our experience with various gesture-to-sound mappings and applications, and describe our preliminary experiments in musical performance with the system.

Keywords: video-based musical interface; gesture-based interaction; facial expression; facial therapy interface.