In developing automated systems to recognize the emotional content of music, we are faced with a problem spanning two disparate domains: the space of human emotions and the acoust...
Erik M. Schmidt, Douglas Turnbull, Youngmoo E. Kim
Research on affective computing is growing rapidly, and new applications are being developed with increasing frequency. These applications use information about the affective/mental states of users to adap...
Gaussian mixture models (GMMs) and the minimum error rate classifier (i.e., the Bayes optimal classifier) are popular and effective tools for speech emotion recognition. Typically, ...
Hao Tang, Stephen M. Chu, Mark Hasegawa-Johnson, T...
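The GMM/minimum-error-rate approach mentioned in the abstract above can be illustrated with a minimal sketch: fit one class-conditional density per emotion and classify by the Bayes decision rule (highest posterior; with equal priors, highest likelihood). The sketch below uses synthetic 1-D features and a single Gaussian per class (the one-component special case of a GMM) purely for brevity; the feature values, class names, and helper functions are illustrative assumptions, not the paper's actual setup.

```python
import math
import random

random.seed(0)

# Hypothetical 1-D acoustic feature (e.g. mean pitch) per utterance,
# for two illustrative emotion classes.
data = {
    "neutral": [random.gauss(0.0, 1.0) for _ in range(200)],
    "angry":   [random.gauss(3.0, 1.0) for _ in range(200)],
}

def fit_gaussian(xs):
    # Maximum-likelihood mean and variance for one class
    # (a one-component special case of a GMM).
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

models = {c: fit_gaussian(xs) for c, xs in data.items()}

def log_likelihood(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def classify(x):
    # Minimum-error-rate (Bayes) decision with equal priors:
    # pick the class whose model assigns the highest likelihood.
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

print(classify(0.2), classify(2.8))
```

A full system would replace the scalar feature with a vector of acoustic features and each single Gaussian with a multi-component GMM trained by EM, but the decision rule stays the same.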
With the increasing demand for spoken language interfaces in human-computer interaction, automatic recognition of emotional states from human speech has become of increasing im...