Sciweavers

3115 search results - page 114 / 623
» Interactive Speech Understanding
ANLP
2000
Predicting Automatic Speech Recognition Performance Using Prosodic Cues
In spoken dialogue systems, it is important for a system to know how likely a speech recognition hypothesis is to be correct, so it can reprompt for fresh input, or, in cases wher...
Diane J. Litman, Julia Hirschberg, Marc Swerts
ICMCS
2005
IEEE
Feature Selection and Stacking for Robust Discrimination of Speech, Monophonic Singing, and Polyphonic Music
In this work we strive to find an optimal set of acoustic features for the discrimination of speech, monophonic singing, and polyphonic music to robustly segment acoustic media st...
Björn Schuller, Brüning J. B. Schmitt, D...
JCP
2008
Speech Displaces the Graphical Crowd
Developers of visual Interface Design Environments (IDEs), like Microsoft Visual Studio and Java NetBeans, are competing to produce increasingly crowded graphical interfaces in order t...
Mohammad M. Alsuraihi, Dimitris I. Rigas
ASSETS
2006
ACM
Non-speech input and speech recognition for real-time control of computer games
This paper reports a comparison of user performance (time and accuracy) when controlling a popular arcade game of Tetris using speech recognition or non-speech (humming) input tec...
Adam J. Sporka, Sri Hastuti Kurniawan, Murni Mahmu...
ICMCS
2005
IEEE
Speaker Independent Speech Emotion Recognition by Ensemble Classification
Emotion recognition is becoming an important factor in future media retrieval and man-machine interfaces. However, even human judges often have difficulty recognizing one’s emo...
Björn Schuller, Stephan Reiter, Ronald Mü...