Sciweavers

Search: Learning sound location from a single microphone
IPSN 2011 (Springer)
Localising speech, footsteps and other sounds using resource-constrained devices
While a number of acoustic localisation systems have been proposed over the last few decades, these have typically either relied on expensive dedicated microphone arrays and works...
Yukang Guo, Mike Hazas
ICASSP 2010 (IEEE)
HMM-based separation of acoustic transfer function for single-channel sound source localization
This paper presents a sound source (talker) localization method using only a single microphone, where an HMM (Hidden Markov Model) of clean speech is introduced to estimate the aco...
Ryoichi Takashima, Tetsuya Takiguchi, Yasuo Ariki
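The core idea in the snippet is that the observed speech carries both the talker's clean speech and a position-dependent acoustic transfer function, and a clean-speech model is used to separate the two. Below is a minimal, hypothetical sketch (not the authors' code) of such a separation in the cepstral domain, where a constant transfer function appears as an additive offset; a diagonal-covariance GMM stands in for the clean-speech HMM, and all names and shapes are assumptions.

```python
import numpy as np

def estimate_transfer_function(obs_cep, clean_means, clean_vars, weights, n_iter=10):
    """
    obs_cep:     (T, D) cepstra of the observed (reverberant) speech
    clean_means: (K, D) means of a diagonal-covariance clean-speech GMM
    clean_vars:  (K, D) variances of the GMM components
    weights:     (K,)   mixture weights
    Returns an estimate of the frame-independent transfer-function cepstrum H.
    """
    H = np.zeros(obs_cep.shape[1])
    for _ in range(n_iter):
        # E-step: posterior of each clean-speech component given (obs - H)
        diff = obs_cep[:, None, :] - H - clean_means[None, :, :]            # (T, K, D)
        log_p = -0.5 * np.sum(diff ** 2 / clean_vars + np.log(2 * np.pi * clean_vars), axis=2)
        log_p += np.log(weights)
        post = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)                              # (T, K)
        # M-step: H is the posterior- and precision-weighted mean residual
        resid = obs_cep[:, None, :] - clean_means[None, :, :]                # (T, K, D)
        w = post[:, :, None] / clean_vars[None, :, :]                        # (T, K, D)
        H = np.sum(w * resid, axis=(0, 1)) / np.sum(w, axis=(0, 1))
    return H
```

The estimated H depends on the talker's position relative to the microphone, so it can then be passed to a per-position classifier, which is presumably how a single microphone yields a location estimate.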
ICMI 2003 (Springer)
A multi-modal approach for determining speaker location and focus
This paper presents a multi-modal approach to locate a speaker in a scene and determine to whom he or she is speaking. We present a simple probabilistic framework that combines mu...
Michael Siracusa, Louis-Philippe Morency, Kevin Wi...
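As a rough illustration of what a "simple probabilistic framework that combines multiple cues" might look like, here is a hedged sketch (an assumption, not the paper's actual model) that fuses independent per-person audio and visual scores into a posterior over who is speaking, treating the cues as conditionally independent.

```python
import numpy as np

def fuse_speaker_cues(audio_scores, visual_scores, prior=None):
    """
    audio_scores, visual_scores: (N,) non-negative likelihoods for N candidate
    speakers from independent audio and visual detectors (assumed inputs).
    Returns a normalized posterior over speakers (naive-Bayes fusion).
    """
    audio_scores = np.asarray(audio_scores, dtype=float)
    visual_scores = np.asarray(visual_scores, dtype=float)
    if prior is None:
        prior = np.full(len(audio_scores), 1.0 / len(audio_scores))
    post = prior * audio_scores * visual_scores
    return post / post.sum()

# Example: three people; audio favours person 2, vision is split between 1 and 2.
print(fuse_speaker_cues([0.1, 0.2, 0.7], [0.3, 0.5, 0.4]))
```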
COGSCI 2002
Learning words from sights and sounds: a computational model
This paper presents an implemented computational model of word acquisition which learns directly from raw multimodal sensory input. Set in an information theoretic framework, the ...
Deb Roy, Alex Pentland
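The snippet mentions an information-theoretic framework for pairing spoken-word candidates with visual categories. One common way to score such pairings is the mutual information between their co-occurrence indicators; the toy sketch below (an assumption, not the published model) estimates that score from paired cluster labels.

```python
import numpy as np

def mutual_information(word_labels, shape_labels):
    """
    word_labels, shape_labels: equal-length integer sequences giving, for each
    multimodal episode, the acoustic cluster and the visual cluster observed.
    Returns their mutual information in bits, estimated from joint counts.
    """
    w = np.asarray(word_labels)
    s = np.asarray(shape_labels)
    joint = np.zeros((w.max() + 1, s.max() + 1))
    for wi, si in zip(w, s):
        joint[wi, si] += 1
    joint /= joint.sum()
    pw = joint.sum(axis=1, keepdims=True)
    ps = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pw @ ps)[nz])))

# Episodes where acoustic cluster 0 reliably co-occurs with visual cluster 1:
print(mutual_information([0, 0, 1, 0, 1], [1, 1, 0, 1, 0]))
```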
CLEAR 2006 (Springer)
Multi- and Single View Multiperson Tracking for Smart Room Environments
Simultaneous tracking of multiple persons in real-world environments is an active research field, and several approaches have been proposed, based on a variety of features...
Keni Bernardin, Tobias Gehrig, Rainer Stiefelhagen