In this paper, we study how to automatically exploit visual concepts in a text-based image retrieval task. First, we use Forests of Fuzzy Decision Trees (FFDTs) to automatically annotate images with visual concepts. Second, optionally using WordNet, we match visual concepts to the textual query. Finally, we filter the text-based image retrieval result list using the FFDTs. This study is performed in the context of two tasks of the CLEF 2008 international campaign: the Visual Concept Detection Task (VCDT) (17 visual concepts) and the photographic retrieval task (ImageCLEFphoto) (39 queries and 20,000 images). Our best VCDT run ranks 4th among the 53 submitted runs. The ImageCLEFphoto results show a clear improvement, in terms of precision at 20, when using the visual concepts that explicitly appear in the query.