A conventional automatic speech recognizer does not perform well in the presence of multiple sound sources, while human listeners are able to segregate and recognize a signal of i...
Yang Shao, Soundararajan Srinivasan, Zhaozhang Jin...
A conventional automatic speech recognizer does not perform well in the presence of noise, while human listeners are able to segregate and recognize speech in noisy conditions. We...
Yang Shao, Zhaozhang Jin, DeLiang Wang, Soundarara...
Understanding three simultaneous speeches is proposed as a challenge problem to foster artificial intelligence, speech and sound understanding or recognition, and computational au...
Hiroshi G. Okuno, Tomohiro Nakatani, Takeshi Kawab...
Abstract: Much effort has been made in computational auditory scene analysis (CASA) to segregate speech from monaural mixtures. The performance of current CASA systems on voice...
We propose a novel approach to auditory stream segregation that extracts individual sounds (auditory streams) from a mixture of sounds in auditory scene analysis. The HBSS (Harmon...
Tomohiro Nakatani, Hiroshi G. Okuno, Takeshi Kawab...
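The entries above repeatedly invoke harmonic grouping as the cue for segregating a voiced stream from a mixture. As a minimal sketch of that idea (not the actual HBSS system, which uses tracking agents over time), the function below, with hypothetical names `harmonic_mask`, `f0`, and `tolerance`, marks frequency bins that lie near integer multiples of an assumed fundamental:

```python
import numpy as np

def harmonic_mask(freqs, f0, tolerance=0.03, max_harmonic=10):
    """Toy harmonic grouping: flag bins within a relative tolerance of
    integer multiples of f0. Illustrative only; real CASA systems also
    track f0 over time and handle overlapping harmonics."""
    mask = np.zeros_like(freqs, dtype=bool)
    for h in range(1, max_harmonic + 1):
        target = h * f0  # h-th harmonic of the assumed fundamental
        mask |= np.abs(freqs - target) <= tolerance * target
    return mask

# Example: a 50 Hz frequency grid and a 200 Hz fundamental
freqs = np.arange(0.0, 4000.0, 50.0)
selected = freqs[harmonic_mask(freqs, f0=200.0)]
```

Bins belonging to the harmonic series (200, 400, 600, ... Hz) are selected, while off-harmonic bins such as 250 Hz are rejected; a segregation system would then resynthesize the stream from the selected time-frequency units.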