Many of the available image databases have keyword annotations associated with the images. Despite the availability of good-quality low-level visual features that reflect the physical content well, image retrieval based on visual features alone suffers from the semantic gap. Text annotations relate to the image context or to a semantic interpretation of the visual content and are not necessarily directly linked to the visual appearance of the images. Keywords and visual features thus provide complementary information, and exploiting both sources of information is an advantage in many applications, as recent work in this area reflects. In this paper, we address the challenge of reducing the semantic gap by using a hybrid visual and conceptual representation of the content within an active relevance feedback context. We introduce a new feature vector, based on the keyword annotations available for the images, which makes use of conceptual information extracted from an external lexical dat...