Sciweavers

PRICAI
2010
Springer

Layered Hypernetwork Models for Cross-Modal Associative Text and Image Keyword Generation in Multimodal Information Retrieval

Conventional methods for multimodal data retrieval use text-tag-based or cross-modal approaches such as tag-image co-occurrence and canonical correlation analysis. However, because text and image features differ in granularity, approaches based on lower-order relationships between modalities may be limited. Here, we propose a novel text and image keyword generation method based on cross-modal associative learning and inference with multimodal queries. We use a modified hypernetwork model, layered hypernetworks (LHNs), which consist of a first (lower) layer containing two or more modality-dependent hypernetworks and a second (upper) layer containing one modality-integrating hypernetwork. LHNs learn higher-order associative relationships between the text and image modalities by training on an example set. After training, LHNs are used to extend multimodal queries by generating text and image keywords via cross-modal inference, i.e. text-to-image and image-to-text inference.
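The cross-modal association idea above can be sketched with a toy higher-order co-occurrence model: sample hyperedges that jointly link combinations of text keywords and image features from training pairs, then answer an image-side query by scoring text keywords from matching hyperedges. This is a minimal illustrative sketch, not the paper's actual LHN implementation; the data, feature names, and functions below are invented for the example.

```python
from collections import defaultdict
from itertools import combinations

# Toy training pairs of (text keywords, image feature ids).
# All data here is illustrative, not from the paper.
examples = [
    ({"beach", "sea"}, {"blue", "sand"}),
    ({"beach", "sunset"}, {"orange", "sand"}),
    ({"forest", "tree"}, {"green", "leaf"}),
]

def build_hyperedges(examples, order=2):
    """Enumerate order-k hyperedges linking text and image feature subsets."""
    edges = []
    for text, image in examples:
        for t in combinations(sorted(text), min(order, len(text))):
            for i in combinations(sorted(image), min(order, len(image))):
                edges.append((frozenset(t), frozenset(i)))
    return edges

def image_to_text(edges, image_query):
    """Score text keywords by hyperedges whose image side matches the query."""
    scores = defaultdict(int)
    for t, i in edges:
        if i <= image_query:  # image side fully contained in the query
            for kw in t:
                scores[kw] += 1
    return sorted(scores, key=scores.get, reverse=True)

edges = build_hyperedges(examples)
print(image_to_text(edges, {"blue", "sand", "orange"}))  # "beach" ranks first
```

The symmetric text-to-image direction would swap the roles of the two sides of each hyperedge; the real LHNs additionally learn which hyperedges to keep via training rather than enumerating all of them.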
Added 29 Jan 2011
Updated 29 Jan 2011
Type Conference
Year 2010
Where PRICAI
Authors JungWoo Ha, Byoung-Hee Kim, Bado Lee, Byoung-Tak Zhang