Automatically annotating or tagging unlabeled audio files has several applications, such as database organization and recommender systems. We are interested in the case where the system is trained on clean, high-quality audio files, but most of the files to be tagged automatically at test time are heavily compressed and noisy, perhaps because they were captured on a mobile device. In this situation we assume the audio files follow a covariate shift model in the acoustic feature space: the feature distributions differ between the training and test phases, but the conditional distribution of labels given features remains unchanged. Our method uses a specially designed audio similarity measure as input to a set of weighted logistic regressors, which attempt to alleviate the influence of covariate shift. Results on a freely available database of sound files contributed and labeled by non-expert users demonstrate effective automatic tagging performance.
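The weighted logistic regression mentioned above can be illustrated by the standard importance-weighting correction for covariate shift: each training sample is reweighted by w(x) = p_test(x) / p_train(x), so that the weighted training loss approximates the expected test loss. The sketch below (plain NumPy, not the paper's implementation) uses synthetic 1-D data where the two densities are Gaussians and the weight ratio is therefore known in closed form; in practice the weights would have to be estimated, and the paper's audio similarity measure would supply the features x.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_weighted_logreg(X, y, w, lr=0.1, n_iter=500):
    """Gradient descent on the importance-weighted logistic log-loss."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = sigmoid(Xb @ theta)
        grad = Xb.T @ (w * (p - y)) / len(y)        # per-sample weights w
        theta -= lr * grad
    return theta

def predict(theta, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (sigmoid(Xb @ theta) >= 0.5).astype(int)

rng = np.random.default_rng(0)

# Toy covariate shift: train density N(-1, 1), test density N(+1, 1),
# while the labeling rule p(y|x) -- sign of x -- is shared by both phases.
X_train = rng.normal(-1.0, 1.0, size=(200, 1))
y_train = (X_train[:, 0] > 0).astype(int)

# Closed-form density ratio N(+1,1)/N(-1,1) = exp(2x), normalized to mean 1.
w = np.exp(2.0 * X_train[:, 0])
w /= w.mean()

theta = fit_weighted_logreg(X_train, y_train, w)

X_test = rng.normal(1.0, 1.0, size=(100, 1))
y_test = (X_test[:, 0] > 0).astype(int)
acc = (predict(theta, X_test) == y_test).mean()
print(f"test accuracy under covariate shift: {acc:.2f}")
```

Because the weights emphasize training samples that fall in the region where test data is likely, the fitted decision boundary is tuned to the test-time distribution even though only training labels are used.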
Gordon Wichern, Makoto Yamada, Harvey D. Thornburg