This paper describes an algorithm that performs a simple form of computational auditory scene analysis to separate multiple speech signals from one another on the basis of the modulation frequencies of their components. The most novel aspect of the algorithm is its use of the cross-correlation of the instantaneous frequencies of a signal's components to identify and separate those components that are likely to have been produced by a common sound source. The putative target speech signal is reconstructed by selecting the components with the greatest mutual correlation, and then using extrinsic information such as fundamental frequency or speaker identification to determine which component clusters belong to which speaker. The system was evaluated by comparing speech recognition accuracy on a target speech signal extracted from a mixture of two speakers. It was found that recognition accuracy obtained when the separation was based on cross-correlation of changes...
Lingyun Gu, Richard M. Stern
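The central grouping cue, clustering components whose instantaneous-frequency trajectories are mutually correlated, can be illustrated with a minimal sketch. The synthetic FM components, trim margins, and correlation threshold below are illustrative assumptions, not the paper's actual implementation; instantaneous frequency is estimated here via the analytic-signal phase derivative.

```python
import numpy as np
from scipy.signal import hilbert

def inst_freq(x, fs):
    """Instantaneous frequency (Hz) from the analytic-signal phase derivative."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 8000
t = np.arange(int(0.5 * fs)) / fs

def fm_tone(f0, fm, beta):
    """Sinusoidal carrier at f0 Hz, frequency-modulated at fm Hz."""
    return np.sin(2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * fm * t))

# Hypothetical components: comp1 and comp2 share a 4 Hz modulator
# (same putative source); comp3 is modulated at 7 Hz (different source).
comp1 = fm_tone(500, 4, 10)
comp2 = fm_tone(900, 4, 10)
comp3 = fm_tone(700, 7, 10)

# Trim edges to avoid Hilbert-transform boundary artifacts.
ifreqs = np.array([inst_freq(c, fs)[200:-200] for c in (comp1, comp2, comp3)])

# Zero-lag normalized cross-correlation of the IF trajectories.
z = ifreqs - ifreqs.mean(axis=1, keepdims=True)
z /= np.linalg.norm(z, axis=1, keepdims=True)
corr = z @ z.T

# Components driven by a common modulator correlate strongly;
# grouping them by thresholding corr yields the source clusters.
same_source = corr > 0.8
```

In this toy case `corr[0, 1]` is near 1 while `corr[0, 2]` is near 0, so thresholding the correlation matrix recovers the intended two-source grouping; the actual system would apply such clustering to filterbank outputs of real mixed speech.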