Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

■ Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult in the nonspeech domain as compared to the speech domain. We constructed a biophysically realistic neural network model simulating this experimental evidence. We propose that a stronger connection between modalities in speech underlies the behavioral difference between the speech and the nonspeech domain. This could be the result of more extensive experience with speech stimuli. Because the match-to-sample paradigm does not allow us to draw conclusions concerning the integration of auditory and visual information, we also simulated two further conditions based on the same paradigm, which tested the integration of auditory and visual infor...
Added 28 Jan 2011
Updated 28 Jan 2011
Type Journal
Year 2010
Where JOCN
Authors Marco Loh, Gabriele Schmid, Gustavo Deco, Wolfram Ziegler