The goal of this paper is to study the image-concept relationship as it pertains to image annotation. We demonstrate how automatic annotation of images can be implemented on partially annotated databases by learning image-concept relationships from positive examples via inter-query learning. Latent semantic analysis (LSA), a method originally designed for text retrieval, is applied to an image/session matrix whose entries are relevance feedback examples collected from a large number of artificial queries (sessions). Singular value decomposition (SVD) is exploited during LSA to propagate image annotations using only relevance feedback information. We show how SVD can be used to filter a noisy image/session matrix and reconstruct missing values.
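To make the SVD filtering step concrete, the following is a minimal sketch of a rank-k reconstruction of an image/session matrix. It assumes a hypothetical binary matrix in which missing relevance feedback is encoded as zero; the function and variable names are illustrative, not part of the paper's implementation.

```python
import numpy as np

def lsa_reconstruct(X, k):
    """Filter a noisy image/session matrix with a rank-k SVD approximation.

    X : (n_images, n_sessions) matrix of relevance-feedback observations
        (e.g. 1 = image marked relevant in a session, 0 = unknown/missing).
    k : number of latent semantic dimensions to retain.
    Returns the rank-k reconstruction, whose filled-in entries can be read
    as propagated relevance (annotation) scores.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the k largest singular values; the remainder is treated as noise.
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Toy example: 5 images x 4 query sessions, with some feedback missing (zeros).
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 0]], dtype=float)
X_hat = lsa_reconstruct(X, k=2)
print(np.round(X_hat, 2))  # formerly missing entries now receive graded scores
```

Under these assumptions, images that co-occur with similar sessions receive nonzero reconstructed scores even where no explicit feedback was given, which is the sense in which annotations are propagated.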