The physical gestures that operate musical instruments are responsible for the qualities of the sound produced in a performance. Gestural information is therefore crucial to a model of music performance, paired with a model of sound synthesis to which this information is applied. The highly constrained nature of performers' gestures makes the task well suited to a constraint-based approach, coupled with a search strategy that maximizes the performers' gestural comfort. We illustrate the problem representation, the search strategy, and a validation of the model against human performance.
Daniele P. Radicioni, Vincenzo Lombardo
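To make the abstract's formulation concrete, here is a minimal sketch of a constraint-based gesture model with comfort maximization. It is not the authors' implementation: the gesture representation, the `feasible` constraint, and the `discomfort` cost are illustrative assumptions; notes are treated as variables, candidate gestures as domain values, binary constraints rule out unreachable transitions, and a dynamic-programming search selects the sequence with the lowest total discomfort.

```python
# Hypothetical gesture encoding: (position, finger) pairs for each note
# of a phrase. Each inner list is the domain of one note variable.
phrase = [
    [(1, 1), (5, 2)],   # candidate gestures for note 1
    [(2, 1), (5, 3)],   # candidate gestures for note 2
    [(3, 2), (7, 4)],   # candidate gestures for note 3
]

def feasible(g, h):
    """Binary constraint: consecutive gestures must stay within reach
    (assumed maximum hand displacement of 4 position units)."""
    return abs(g[0] - h[0]) <= 4

def discomfort(g, h):
    """Assumed comfort model: penalize hand displacement and finger changes."""
    return abs(g[0] - h[0]) + (0 if g[1] == h[1] else 1)

def best_gesture_sequence(phrase):
    """Viterbi-style search for the minimum-discomfort gesture sequence
    satisfying the binary feasibility constraints."""
    # cost[g] = lowest cumulative discomfort of any sequence ending in g
    cost = {g: 0 for g in phrase[0]}
    back = [{} for _ in phrase]          # back-pointers for reconstruction
    for i in range(1, len(phrase)):
        new_cost = {}
        for h in phrase[i]:
            candidates = [(cost[g] + discomfort(g, h), g)
                          for g in cost if feasible(g, h)]
            if candidates:               # skip gestures with no feasible predecessor
                c, g = min(candidates)
                new_cost[h], back[i][h] = c, g
        cost = new_cost
    # Follow back-pointers from the cheapest final gesture.
    best = min(cost, key=cost.get)
    seq, g = [best], best
    for i in range(len(phrase) - 1, 0, -1):
        g = back[i][g]
        seq.append(g)
    seq.reverse()
    return seq, cost[best]

if __name__ == "__main__":
    seq, total = best_gesture_sequence(phrase)
    print(seq, total)
```

On this toy input the search returns the gesture sequence with the smallest accumulated displacement and finger-change penalty, which is the sense in which "gestural comfort" is maximized here; any real model would replace the toy cost with an empirically grounded one.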