Ryan J. Cassidy, Jonathan Berger, Kyogu Lee

The human ability to recognize, identify, and compare sounds by their resemblance to particular vowels provides an intuitive, easily learned representation for complex data. We describe implementations of vocal tract models designed specifically for sonification. The models are based on classical designs, including those of Klatt [1] and Cook [2], and implementations in MATLAB, STK [3], and PD [4] are presented. Several sonification methods were tested and evaluated on data sets of hyperspectral images of colon cells.
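As an illustrative sketch only (not the authors' implementation), a vowel-based sonification in the spirit of the Klatt synthesizer can pass a pulse-train glottal source through a cascade of second-order formant resonators, with a scalar data value interpolating formant targets between two vowels. The vowel names, formant frequencies, bandwidths, and function names below are assumptions for the example.

```python
import math

# Hypothetical formant targets (center frequency Hz, bandwidth Hz) for /a/ and /i/.
VOWEL_A = [(730.0, 90.0), (1090.0, 110.0), (2440.0, 170.0)]
VOWEL_I = [(270.0, 60.0), (2290.0, 100.0), (3010.0, 180.0)]

def resonator(signal, freq, bw, fs):
    """Second-order IIR resonator centered at freq with bandwidth bw."""
    r = math.exp(-math.pi * bw / fs)                 # pole radius from bandwidth
    a1 = -2.0 * r * math.cos(2.0 * math.pi * freq / fs)
    a2 = r * r
    g = 1.0 - r                                      # rough gain scaling
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = g * x - a1 * y1 - a2 * y2
        y2, y1 = y1, y
        out.append(y)
    return out

def vowel_tone(value, fs=8000, dur=0.25, f0=110.0):
    """Map a data value in [0, 1] to a vowel-like tone (hypothetical mapping)."""
    n = int(fs * dur)
    src = [0.0] * n
    for i in range(0, n, int(fs / f0)):              # impulse-train glottal source
        src[i] = 1.0
    sig = src
    for (fa, ba), (fi, bi) in zip(VOWEL_A, VOWEL_I):
        f = (1.0 - value) * fa + value * fi          # interpolate formant frequency
        bw = (1.0 - value) * ba + value * bi         # interpolate bandwidth
        sig = resonator(sig, f, bw, fs)              # cascade the formant sections
    peak = max(abs(v) for v in sig) or 1.0
    return [v / peak for v in sig]                   # normalize to [-1, 1]
```

A cascade topology is used here for simplicity; Klatt-style synthesizers also support a parallel formant configuration, which gives independent control of each formant's amplitude.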