Sciweavers
Search results for "Context based multimodal fusion"
ACL 2006
An Unsupervised Morpheme-Based HMM for Hebrew Morphological Disambiguation
Morphological disambiguation is the process of assigning one set of morphological features to each individual word in a text. When the word is ambiguous (there are several possibl...
Meni Adler, Michael Elhadad
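As a rough illustration of HMM-based tagging (not the authors' unsupervised morpheme-based model), the Python sketch below decodes the most likely tag sequence with the standard Viterbi algorithm; the function name and every probability table it expects are hypothetical placeholders.

```python
# Generic first-order HMM Viterbi decoder (illustrative sketch only).
import math

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Return the most likely tag sequence for `words` under a first-order HMM."""
    # Column 0: log P(tag) + log P(word0 | tag), no backpointer yet.
    best = {t: (math.log(start_p[t]) + math.log(emit_p[t].get(words[0], 1e-9)), None)
            for t in tags}
    cols = [best]
    for w in words[1:]:
        nxt = {}
        for t in tags:
            # Pick the best previous tag to transition from.
            prev, score = max(((p, cols[-1][p][0] + math.log(trans_p[p][t])) for p in tags),
                              key=lambda x: x[1])
            nxt[t] = (score + math.log(emit_p[t].get(w, 1e-9)), prev)
        cols.append(nxt)
    # Backtrack from the best final tag.
    last = max(tags, key=lambda t: cols[-1][t][0])
    seq = [last]
    for col in reversed(cols[1:]):
        last = col[last][1]
        seq.append(last)
    return list(reversed(seq))
```

With small hand-built start_p, trans_p, and emit_p dictionaries this runs as-is; a real disambiguator would estimate those distributions from data, which the paper does without supervision.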
ICIP 2006, IEEE
Extracting Static Hand Gestures in Dynamic Context
Cued Speech is a specific visual coding that complements oral language lip-reading by adding static hand gestures (a static gesture can be presented on a single photograph as it ...
Thomas Burger, Alexandre Benoit, Alice Caplier
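A minimal sketch of one way to isolate static gestures from video, assuming a hypothetical hand region of interest and a motion-energy threshold; this is not the authors' pipeline, only a simple frame-differencing baseline.

```python
# Flag frames as "static" when motion energy inside a hand ROI is low
# (illustrative sketch only; ROI and threshold are assumptions).
import numpy as np

def static_frames(frames, roi, threshold=2.0):
    """frames: list of (H, W) grayscale arrays; roi: (y0, y1, x0, x1) hand box.
    Returns indices of frames with little motion inside the ROI."""
    y0, y1, x0, x1 = roi
    static = []
    for i in range(1, len(frames)):
        prev = frames[i - 1][y0:y1, x0:x1].astype(float)
        curr = frames[i][y0:y1, x0:x1].astype(float)
        # Mean absolute difference to the previous frame as a crude motion measure.
        if np.abs(curr - prev).mean() < threshold:
            static.append(i)
    return static
```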
GCB 2009, Springer
Integration and Visualisation of Multimodal Biological Data
Understanding complex biological systems requires data from manifold biological levels. Often this data is analysed in some meaningful context, for example, by integrating it int...
Hendrik Rohn, Christian Klukas, Falk Schreiber
CoRR 2011
Probability Based Clustering for Document and User Properties
Information Retrieval systems can be improved by exploiting context information such as user and document features. This article presents a model based on overlapping probabilistic...
Thomas Mandl, Christa Womser-Hacker
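A minimal sketch of overlapping (soft) probabilistic cluster membership, assuming a simple mixture-of-multinomials view of documents; the matrices, priors, and function name are hypothetical and this is not the authors' model.

```python
# Soft cluster memberships: each document gets a graded probability for every
# cluster, so clusters can overlap (illustrative sketch only).
import numpy as np

def soft_memberships(doc_term, topic_term, topic_prior):
    """doc_term: (n_docs, n_terms) term counts; topic_term: (k, n_terms) per-cluster
    term probabilities; topic_prior: (k,) mixing weights. Returns (n_docs, k)."""
    log_lik = doc_term @ np.log(topic_term.T)        # log P(doc | cluster), up to a constant
    log_post = log_lik + np.log(topic_prior)         # add the log prior weight of each cluster
    log_post -= log_post.max(axis=1, keepdims=True)  # stabilise before exponentiation
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)    # normalise to overlapping memberships
```

Context features such as user properties could enter as extra columns of doc_term or as additional priors; how that is done in the article is not reproduced here.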
ECIR 2008, Springer
Semantic Relationships in Multi-modal Graphs for Automatic Image Annotation
It is important to integrate contextual information in order to improve the inaccurate results of current approaches for automatic image annotation. Graph-based representations all...
Vassilios Stathopoulos, Jana Urban, Joemon M. Jose
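A minimal sketch of scoring annotation terms on a mixed image/term graph with a random walk with restart; the node layout, edge weights, and restart parameter are assumptions, not the method proposed in the paper.

```python
# Random walk with restart over a graph whose first n_images nodes are images
# and remaining nodes are annotation terms (illustrative sketch only).
import numpy as np

def rank_terms(adj, query_idx, n_images, restart=0.15, iters=50):
    """adj: (n, n) symmetric edge weights over image + term nodes (images first);
    query_idx: index of the unannotated image. Returns scores for the term nodes."""
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    trans = adj / col_sums                      # column-stochastic transition matrix
    p = np.zeros(adj.shape[0])
    p[query_idx] = 1.0
    restart_vec = p.copy()
    for _ in range(iters):                      # power iteration with restart
        p = (1 - restart) * trans @ p + restart * restart_vec
    return p[n_images:]                         # high-scoring terms suggest annotations
```

Image-image edges would typically carry visual similarity and image-term edges existing annotations; the paper's specific multi-modal graph construction is not reproduced here.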