Sciweavers

137 search results for "On the Use of NonVerbal Speech Sounds in Human Communication" (page 2 of 28)
COGSCI 2002
Learning words from sights and sounds: a computational model
This paper presents an implemented computational model of word acquisition which learns directly from raw multimodal sensory input. Set in an information theoretic framework, the ...
Deb Roy, Alex Pentland
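
The truncated abstract above only names the information-theoretic framing, so the following is a rough, generic sketch of how co-occurring word tokens and visual categories could be scored by (pointwise) mutual information. It is not the authors' full model; the word list, category list, and co-occurrence counts are invented for illustration.

    # Illustrative sketch only: a generic information-theoretic association score
    # between spoken word tokens and visual categories, NOT the full model from
    # the paper. The co-occurrence counts below are made up.
    import numpy as np

    # Rows: candidate word tokens; columns: visual categories (hypothetical data).
    words = ["ball", "doggy", "cup"]
    categories = ["BALL", "DOG", "CUP"]
    counts = np.array([
        [30.0,  2.0,  1.0],   # "ball" mostly co-occurs with BALL views
        [ 3.0, 25.0,  2.0],
        [ 1.0,  4.0, 20.0],
    ])

    joint = counts / counts.sum()              # joint distribution p(w, c)
    p_w = joint.sum(axis=1, keepdims=True)     # marginal p(w)
    p_c = joint.sum(axis=0, keepdims=True)     # marginal p(c)

    # Pointwise mutual information for each (word, category) pair.
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.where(joint > 0, np.log2(joint / (p_w * p_c)), 0.0)

    # Total mutual information: how informative word tokens are about categories.
    mi = float((joint * pmi).sum())

    for i, w in enumerate(words):
        best = int(np.argmax(pmi[i]))
        print(f"{w!r} -> {categories[best]} (PMI = {pmi[i, best]:.2f} bits)")
    print(f"I(word; category) = {mi:.2f} bits")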
IROS 2008 (IEEE)
Segmenting acoustic signal with articulatory movement using Recurrent Neural Network for phoneme acquisition
This paper proposes a computational model for phoneme acquisition by infants. Human infants perceive speech sounds not as discrete phoneme sequences but as continuous acoustic ...
Hisashi Kanda, Tetsuya Ogata, Kazunori Komatani, H...
ICASSP 2011 (IEEE)
Binaural sound source separation motivated by auditory processing
In this paper we present a new method of signal processing for robust speech recognition using two microphones. The method, loosely based on the human binaural hearing system, con...
Chanwoo Kim, Kshitiz Kumar, Richard M. Stern
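
The method itself is cut off in the snippet, so the sketch below shows only a generic form of interaural-cue-based separation (time-frequency masking on the interaural phase difference), not the specific ICASSP 2011 algorithm. The function name, the 0.5-radian threshold, and the toy two-channel signals are all assumptions made for illustration.

    # Illustrative sketch only: generic binaural separation by interaural-phase-
    # difference masking, not the algorithm from the paper. Assumes a two-channel
    # recording `left`, `right` (numpy arrays) with the target talker roughly in
    # front, so its interaural phase difference is near zero.
    import numpy as np
    from scipy.signal import stft, istft

    def separate_front_source(left, right, fs=16000, nperseg=512, ipd_threshold=0.5):
        """Keep time-frequency bins whose interaural phase difference is small."""
        _, _, L = stft(left, fs=fs, nperseg=nperseg)
        _, _, R = stft(right, fs=fs, nperseg=nperseg)

        # Interaural phase difference per time-frequency bin, wrapped to [-pi, pi].
        ipd = np.angle(L * np.conj(R))

        # Binary mask: bins dominated by a source near the median plane (small IPD).
        mask = (np.abs(ipd) < ipd_threshold).astype(float)

        # Apply the mask to the averaged channels and resynthesize an estimate.
        _, est = istft(0.5 * mask * (L + R), fs=fs, nperseg=nperseg)
        return est

    if __name__ == "__main__":
        # Toy demo with synthetic signals standing in for real microphone inputs.
        fs = 16000
        t = np.arange(fs) / fs
        target = np.sin(2 * np.pi * 440 * t)             # "front" source: same in both ears
        interferer = 0.5 * np.sin(2 * np.pi * 1000 * t)  # "side" source: delayed in one ear
        delay = 8                                        # samples of interaural delay
        left = target + interferer
        right = target + np.roll(interferer, delay)
        print(separate_front_source(left, right, fs=fs).shape)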
ICASSP 2011 (IEEE)
Analysis of phone confusion in EMG-based speech recognition
In this paper we present a study on phone confusabilities based on phone recognition experiments from facial surface electromyographic (EMG) signals. In our study EMG captures the...
Michael Wand, Tanja Schultz
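
As a minimal illustration of what a phone-confusion analysis tabulates, the sketch below builds a row-normalized confusion matrix from frame-aligned reference and hypothesis phone labels. The phone inventory and the label sequences are invented; nothing here reproduces the EMG front end or recognizer of the study.

    # Illustrative sketch only: tabulating phone confusions from frame-aligned
    # reference and hypothesis labels (a generic analysis, not the EMG-based
    # recognizer of the paper). The phone set and label sequences are invented.
    from collections import Counter

    phones = ["m", "n", "ae", "eh", "s"]

    # Hypothetical frame-level alignments: reference vs. recognized phone labels.
    reference  = ["m", "m", "n", "ae", "ae", "eh", "s", "s", "n", "eh"]
    hypothesis = ["m", "n", "n", "ae", "eh", "eh", "s", "s", "m", "eh"]

    pairs = Counter(zip(reference, hypothesis))

    # Print a row-normalized confusion matrix: P(recognized | reference).
    print("ref\\hyp " + " ".join(f"{p:>5}" for p in phones))
    for ref in phones:
        total = sum(pairs[(ref, hyp)] for hyp in phones) or 1
        row = " ".join(f"{pairs[(ref, hyp)] / total:5.2f}" for hyp in phones)
        print(f"{ref:>7} {row}")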
ICCHP 1994 (Springer)
Synthesizing Non-Speech Sound to Support Blind and Visually Impaired Computer Users
This paper describes work in progress on automatic generation of "impact sounds" based on physical modelling. These sounds can be used as non-speech audio presentation of...
Alireza Darvishi, Valentin Guggiana, Eugen Muntean...
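
Impact sounds are commonly approximated by modal synthesis, i.e. a sum of exponentially decaying sinusoids; the sketch below is a generic illustration in that spirit, not the physical model described in the paper. The mode frequencies, decay rates, and amplitudes are invented.

    # Illustrative sketch only: a generic modal synthesis of an "impact sound" as
    # a sum of exponentially decaying sinusoids. The mode parameters are invented;
    # this is not the physical model from the paper.
    import numpy as np

    def impact_sound(modes, fs=44100, duration=1.0):
        """modes: iterable of (frequency_hz, decay_per_second, amplitude) tuples."""
        t = np.arange(int(fs * duration)) / fs
        signal = np.zeros_like(t)
        for freq, decay, amp in modes:
            signal += amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)
        # Normalize to avoid clipping if the result is written to a 16-bit file.
        return signal / np.max(np.abs(signal))

    # Hypothetical modes roughly evoking a struck wooden object.
    wood_like = [(180.0, 12.0, 1.0), (440.0, 18.0, 0.6), (1250.0, 30.0, 0.3)]
    samples = impact_sound(wood_like)
    print(samples.shape, samples.dtype)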