We present an approach for tracking a lecturer over the course of his talk. We use features from multiple cameras and microphones, and process them in a joint particle filter f...
Kai Nickel, Tobias Gehrig, Hazim Kemal Ekenel, Joh...
In this paper, we present an approach for speaker change detection in broadcast video using joint audio-visual scene change statistics. Our experiments indicate that using joint a...
People can understand complex auditory and visual information, often using one to disambiguate the other. Automated analysis, even at a low level, faces severe challenges, includin...
John W. Fisher III, Trevor Darrell, William T. Fre...
Speaker change detection is most commonly done by statistically determining whether the two adjacent segments of a speech stream are significantly different or not. In this paper, ...
We investigate the challenging issue of joint audio-visual analysis of generic videos, targeting semantic concept detection. We propose to extract a novel representation, the Sh...
Wei Jiang, Courtenay V. Cotton, Shih-Fu Chang, Dan...