ICMCS 2005 · IEEE

Multi-sensory speech processing: incorporating automatically extracted hidden dynamic information

We describe a novel technique for multi-sensory speech processing that enhances noisy speech and improves noise-robust speech recognition. Both air- and bone-conductive microphones are used to capture speech data, where the bone sensor carries virtually noise-free hidden dynamic information of the clean speech in the form of formant trajectories. Distortions in the bone-sensor signal, such as teeth clacking and noise leakage, can be effectively removed by exploiting the formant information automatically extracted from that signal. This paper reports an improved technique for synthesizing speech waveforms based on LPC cepstra computed analytically from the formant trajectories. When this new signal stream is fused with the other available speech data streams, we achieve improved performance on noisy speech recognition.
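The abstract's waveform-synthesis step relies on LPC cepstra computed analytically from formant trajectories. For an all-pole vocal-tract model, each formant (frequency f_k, bandwidth b_k) contributes a conjugate pole pair, and the minimum-phase cepstrum has a closed form. The sketch below illustrates that standard closed-form relationship; the function name and parameters are our own, and the paper's exact formulation may differ:

```python
import numpy as np

def formants_to_lpc_cepstra(formants_hz, bandwidths_hz, fs, n_ceps):
    """Analytically compute LPC cepstral coefficients from formant
    frequencies and bandwidths of an all-pole vocal-tract model.

    Each formant contributes a conjugate pole pair
        z_k = r_k * exp(+/- j * 2*pi*f_k / fs),  r_k = exp(-pi * b_k / fs),
    and the cepstrum of the all-pole filter is, for n >= 1,
        c_n = (2/n) * sum_k r_k**n * cos(2*pi * n * f_k / fs).
    """
    f = np.asarray(formants_hz, dtype=float)
    b = np.asarray(bandwidths_hz, dtype=float)
    r = np.exp(-np.pi * b / fs)        # pole radii from formant bandwidths
    theta = 2.0 * np.pi * f / fs       # pole angles from formant frequencies
    return np.array([
        (2.0 / n) * np.sum(r**n * np.cos(n * theta))
        for n in range(1, n_ceps + 1)
    ])

# Illustrative formant track for a vowel-like frame (values are examples only)
ceps = formants_to_lpc_cepstra(
    formants_hz=[500.0, 1500.0, 2500.0],
    bandwidths_hz=[80.0, 120.0, 160.0],
    fs=16000, n_ceps=12,
)
```

Because the cepstra follow analytically from the formant trajectories, the synthesis path needs no LPC analysis of the (distorted) bone-sensor waveform itself.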
Added: 24 Jun 2010
Updated: 24 Jun 2010
Type: Conference
Year: 2005
Where: ICMCS
Authors: Amarnag Subramanya, Li Deng, Zicheng Liu, Zhengyou Zhang