Images are increasingly embedded in HTML documents on the WWW. Such documents essentially provide a rich collection of images that users can query. Interestingly, the semantics of these images are typically described by their surrounding text. Unfortunately, most WWW image search engines fail to exploit these image semantics, resulting in poor recall and precision. In this paper, we propose a novel image representation model called Weight ChainNet. Weight ChainNet is based on lexical chains that represent the semantics of an image from its nearby text. A new formula for computing semantic similarities, called the list space model, is also introduced. To further improve retrieval effectiveness, we also propose two relevance feedback mechanisms. We conducted an extensive performance study on a collection of 5000 images obtained from documents identified by more than 2000 URLs. Our results show that our models and methods outperform existing...