Affective and human-centered computing have attracted considerable attention in recent years, mainly due to the abundance of devices and environments able to exploit multimodal i...
This study investigates a method for estimating a driver’s spontaneous frustration in the real world. In line with a specific definition of emotion, the proposed method inte...
Lucas Malta, Chiyomi Miyajima, Norihide Kitaoka, K...
Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the lik...
We study key issues related to multilingual acoustic modeling for automatic speech recognition (ASR) through a series of large-scale ASR experiments. Our study explores shared str...
Hui Lin, Li Deng, Dong Yu, Yifan Gong, Alex Acero,...
We recently proposed a new algorithm to perform acoustic model adaptation to noisy environments called Linear Spline Interpolation (LSI). In this method, the nonlinear relationshi...
Michael L. Seltzer, Alex Acero, Kaustubh Kalgaonka...