Sciweavers

100 search results - page 10 / 20
» Signaling emotion in tagclouds
ICASSP
2010
IEEE
Finding emotionally involved speech using implicitly proximity-annotated laughter
Browsing through collections of audio recordings of conversations nominally relies on the processing of participants’ lexical productions. The evolving verbal and non-verbal cont...
Kornel Laskowski
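As a rough illustration of the idea in this entry, the sketch below marks speech segments as "emotionally involved" when they fall within a fixed time window of a laughter event. The window size, the segment representation, and the function name are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch: treat speech segments that lie within a fixed window
# around laughter events as "emotionally involved". The 5-second window is
# an assumed parameter, not a value from the paper.

def proximity_annotate(segments, laughter_times, window_s=5.0):
    """Label each (start, end) speech segment by its proximity to laughter.

    segments       : list of (start, end) tuples in seconds
    laughter_times : list of laughter onset times in seconds
    window_s       : assumed proximity threshold in seconds
    """
    labeled = []
    for start, end in segments:
        near = any(start - window_s <= t <= end + window_s for t in laughter_times)
        labeled.append(((start, end), near))
    return labeled

if __name__ == "__main__":
    segs = [(0.0, 2.5), (3.0, 6.0), (20.0, 24.0)]
    laughs = [5.5, 40.0]
    for seg, involved in proximity_annotate(segs, laughs):
        print(seg, "involved" if involved else "neutral")
```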
COST
2008
Springer
Towards Facial Gestures Generation by Speech Signal Analysis Using HUGE Architecture
In our current work we concentrate on finding the correlation between the speech signal and the occurrence of facial gestures. The motivation behind this work is computer-generated human correspon...
Goranka Zoric, Karlo Smid, Igor S. Pandzic
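A minimal sketch of how one might probe the correlation between a speech signal and gesture occurrence, assuming short-time energy as the acoustic feature and per-frame binary gesture labels; the feature choice, frame parameters, and helper names are illustrative assumptions and not the HUGE architecture itself.

```python
# Hypothetical sketch: measure how strongly a per-frame acoustic feature
# (here, short-time log energy) co-occurs with a binary facial-gesture label.
# Feature choice, frame length, and hop size are assumptions.
import numpy as np

def frame_energy(signal, frame_len=400, hop=160):
    """Short-time log energy per frame (frame_len and hop in samples)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    return np.array([np.log(np.sum(f ** 2) + 1e-10) for f in frames])

def feature_gesture_correlation(energy, gesture_labels):
    """Pearson correlation between the feature track and 0/1 gesture labels."""
    n = min(len(energy), len(gesture_labels))
    return np.corrcoef(energy[:n], gesture_labels[:n])[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sig = rng.standard_normal(16000)                        # one second of toy audio at 16 kHz
    energy = frame_energy(sig)
    gestures = (energy > np.median(energy)).astype(float)   # toy gesture labels
    print("correlation:", feature_gesture_correlation(energy, gestures))
```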
COST
2009
Springer
Multiple Feature Extraction and Hierarchical Classifiers for Emotions Recognition
The recognition of the emotional states of a speaker is a multidisciplinary research area that has received great interest in recent years. One of the most important goal...
Enrique M. Albornoz, Diego H. Milone, Hugo Leonard...
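The following is a hedged sketch of what a two-stage hierarchical emotion classifier can look like: a coarse stage separates broad groups (assumed here to be high vs. low arousal), and per-group classifiers resolve the final emotion. The grouping, the scikit-learn SVMs, and all names are assumptions for illustration, not the classifiers described in the paper.

```python
# Hypothetical two-level (hierarchical) emotion classifier sketch.
import numpy as np
from sklearn.svm import SVC

# Assumed grouping of emotion labels into coarse arousal classes.
GROUP = {"anger": "high", "joy": "high", "sadness": "low", "boredom": "low"}

class HierarchicalEmotionClassifier:
    def __init__(self):
        self.stage1 = SVC()   # coarse group classifier
        self.stage2 = {}      # one fine-grained classifier per group

    def fit(self, X, y):
        groups = np.array([GROUP[label] for label in y])
        self.stage1.fit(X, groups)
        for g in set(groups):
            mask = groups == g
            clf = SVC()
            clf.fit(X[mask], y[mask])
            self.stage2[g] = clf

    def predict(self, X):
        coarse = self.stage1.predict(X)
        return np.array([self.stage2[g].predict(x.reshape(1, -1))[0]
                         for g, x in zip(coarse, X)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 8))                            # toy feature vectors
    y = np.array(["anger", "joy", "sadness", "boredom"] * 10)   # toy labels
    model = HierarchicalEmotionClassifier()
    model.fit(X, y)
    print(model.predict(X[:4]))
```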
ICASSP
2011
IEEE
F0 range and peak alignment across speakers and emotions
We present an analysis of F0 range and peak alignment in emotional speech from a heterogeneous group of speakers varying in age and gender. Both speaker and emotion had a strong e...
Eric Morley, Jan P. H. van Santen, Esther Klabbers...
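A small sketch of the two measures named in this entry, under assumed definitions: F0 range as a percentile span expressed in semitones, and peak alignment as the offset of the F0 maximum from a syllable onset. The exact measures used by the authors may differ.

```python
# Hypothetical F0 range and peak-alignment measures; the percentile-based
# range and the alignment definition are assumptions for illustration.
import numpy as np

def f0_range_semitones(f0_hz):
    """F0 range as the 90th-to-10th percentile span, in semitones."""
    voiced = f0_hz[f0_hz > 0]                # ignore unvoiced frames
    lo, hi = np.percentile(voiced, [10, 90])
    return 12.0 * np.log2(hi / lo)

def peak_alignment(f0_hz, times_s, syllable_onset_s):
    """Time of the F0 peak relative to the syllable onset (seconds)."""
    peak_idx = int(np.argmax(f0_hz))
    return times_s[peak_idx] - syllable_onset_s

if __name__ == "__main__":
    t = np.arange(0, 1.0, 0.01)                                # 10 ms frames
    f0 = 180 + 60 * np.exp(-((t - 0.45) ** 2) / 0.01)          # toy rise-fall contour
    print("range (st):", round(f0_range_semitones(f0), 2))
    print("peak alignment (s):", round(peak_alignment(f0, t, 0.30), 2))
```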
MIR
2010
ACM
Feature selection for content-based, time-varying musical emotion regression
In developing automated systems to recognize the emotional content of music, we are faced with a problem spanning two disparate domains: the space of human emotions and the acoust...
Erik M. Schmidt, Douglas Turnbull, Youngmoo E. Kim
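As a rough sketch of feature selection for a continuous (regression) emotion target, the snippet below ranks acoustic features by their correlation with the target and fits a ridge regressor on the top-k survivors; the ranking criterion, the value of k, and the ridge model are illustrative assumptions, not the paper's method.

```python
# Hypothetical feature-selection sketch for continuous emotion regression.
import numpy as np
from sklearn.linear_model import Ridge

def select_features(X, y, k=5):
    """Return indices of the k features most correlated with the target y."""
    corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(corrs)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))                       # 200 time frames, 20 toy features
    y = 0.8 * X[:, 3] - 0.5 * X[:, 7] + 0.1 * rng.standard_normal(200)   # toy arousal target
    keep = select_features(X, y, k=5)
    model = Ridge().fit(X[:, keep], y)
    print("selected features:", sorted(keep.tolist()))
    print("R^2 on training data:", round(model.score(X[:, keep], y), 3))
```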