Sciweavers

316 search results - page 9 / 64
» The Catchment Feature Model for Multimodal Language Analysis
PAMI
2002
Extraction of Visual Features for Lipreading
The multimodal nature of speech is often ignored in human-computer interaction, but lip deformations and other body motions, such as those of the head, convey additional information...
Iain Matthews, Timothy F. Cootes, J. Andrew Bangha...
WWW
2002
ACM
OCTOPUS: aggressive search of multi-modality data using multifaceted knowledge base
An important trend in Web information processing is the support of multimedia retrieval. However, the prevailing paradigm for multimedia retrieval, content-based retrieval (C...
Jun Yang 0003, Qing Li, Yueting Zhuang
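
As a point of reference for the content-based retrieval (CBR) paradigm the abstract contrasts against, the sketch below ranks media items by cosine similarity between low-level feature vectors. The toy index, dimensionality, and query are assumptions for illustration; this is not the OCTOPUS system or its knowledge base.

    # Minimal sketch of the content-based retrieval (CBR) paradigm:
    # rank items by cosine similarity between feature vectors.
    # Toy data only; not the OCTOPUS approach.
    import numpy as np

    def cosine_rank(query_vec, index_vecs):
        """Return item indices sorted by cosine similarity to the query, plus the scores."""
        q = query_vec / np.linalg.norm(query_vec)
        m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
        sims = m @ q
        return np.argsort(-sims), sims

    # Hypothetical index of 4 media items described by 5-dimensional feature vectors.
    rng = np.random.default_rng(42)
    index = rng.random((4, 5))
    query = index[2] + 0.05 * rng.normal(size=5)  # a query resembling item 2

    order, sims = cosine_rank(query, index)
    print("ranking:", order.tolist(), "similarities:", np.round(sims[order], 3).tolist())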
ECIR
2011
Springer
Fractional Similarity: Cross-Lingual Feature Selection for Search
Abstract. Training data as well as supplementary data such as usage-based click behavior may abound in one search market (i.e., a particular region, domain, or language) and be much...
Jagadeesh Jagarlamudi, Paul N. Bennett
FGR
2004
IEEE
Multimodal Coordination of Facial Action, Head Rotation, and Eye Motion during Spontaneous Smiles
Both the configuration of facial features and the timing of facial actions are important to emotion and communication. Previous literature has focused on the former. We developed ...
Jeffrey F. Cohn, Lawrence Ian Reed, Tsuyoshi Moriy...
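
To make the idea of timing coordination concrete, the sketch below computes lagged Pearson correlations between two toy signals standing in for smile intensity and head rotation. The signal names, lag range, and synthetic data are illustrative assumptions, not the study's actual measurement procedure.

    # Minimal sketch of temporal-coordination analysis between two signals
    # (e.g., smile intensity and head rotation). Generic lagged correlation;
    # not the measures used in the paper.
    import numpy as np

    def lagged_correlation(a, b, max_lag):
        """Pearson correlation between a and b for frame lags in [-max_lag, max_lag]."""
        results = {}
        for lag in range(-max_lag, max_lag + 1):
            if lag < 0:
                x, y = a[:lag], b[-lag:]
            elif lag > 0:
                x, y = a[lag:], b[:-lag]
            else:
                x, y = a, b
            results[lag] = float(np.corrcoef(x, y)[0, 1])
        return results

    # Toy signals: "head rotation" copies "smile intensity" with a 3-frame delay plus noise.
    t = np.linspace(0, 2 * np.pi, 200)
    smile = np.sin(t)
    head = np.roll(smile, 3) + 0.1 * np.random.default_rng(0).normal(size=t.size)

    corr = lagged_correlation(smile, head, max_lag=10)
    best = max(corr, key=corr.get)
    print(f"strongest coupling at lag {best} frames (r = {corr[best]:.2f})")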
AAAI
2008
Multimodal People Detection and Tracking in Crowded Scenes
This paper presents a novel people detection and tracking method based on a multi-modal sensor fusion approach that utilizes 2D laser range and camera data. The data points in the...
Luciano Spinello, Rudolph Triebel, Roland Siegwart
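
The sketch below illustrates the general shape of such a fusion pipeline: cluster the 2D laser scan into candidate blobs, project each cluster centre into the image with an assumed pinhole calibration, and combine a laser-based confidence with a stand-in image detector score. The clustering threshold, camera parameters, and fusion rule are assumptions for illustration, not the method proposed in the paper.

    # Minimal sketch of 2D laser / camera late fusion for people detection.
    # Clustering, projection, and score combination are illustrative assumptions.
    import numpy as np

    def cluster_laser_points(points, gap=0.3):
        """Group consecutive 2D laser points (x, y) whose spacing stays below `gap` metres."""
        clusters, current = [], [points[0]]
        for p, q in zip(points[:-1], points[1:]):
            if np.linalg.norm(q - p) < gap:
                current.append(q)
            else:
                clusters.append(np.array(current))
                current = [q]
        clusters.append(np.array(current))
        return clusters

    def project_to_image(center, focal=500.0, cx=320.0):
        """Project a cluster centre (x forward, y left, metres) to an image column; assumed pinhole calibration."""
        x, y = center
        return cx - focal * y / max(x, 1e-6)

    def fuse_scores(laser_score, image_score, w=0.5):
        """Hypothetical late fusion: weighted average of per-modality confidences."""
        return w * laser_score + (1.0 - w) * image_score

    # Toy scan: two groups of returns in front of the sensor.
    scan = np.array([[2.0, -0.1], [2.0, 0.0], [2.0, 0.1],
                     [4.0, 1.0], [4.0, 1.1]])
    for c in cluster_laser_points(scan):
        center = c.mean(axis=0)
        col = project_to_image(center)
        # Crude laser confidence from cluster width; the image score 0.7 is a stand-in detector output.
        laser_conf = float(np.exp(-abs(np.ptp(c[:, 1]) - 0.4)))
        print(f"cluster at {center}, image column {col:.1f}, fused score {fuse_scores(laser_conf, 0.7):.2f}")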