In this paper, we present a novel approach to classifying texture collections that does not require experts to provide an annotated training set. Given an image collection, we extract a set of invariant descriptors from each image. The descriptors of all images are vector-quantized to form 'keypoints', and each texture image is then represented by a 'bag-of-keypoints' vector. By analogy with text classification, we use Probabilistic Latent Semantic Indexing (PLSI) to perform unsupervised classification. The proposed approach is evaluated on the UIUC database, which contains significant viewpoint and scale changes. We also report the performance of classifying new images using the parameters learnt from the unannotated image collection. The experimental results clearly demonstrate that the approach is robust to scale and viewpoint changes and achieves good classification accuracy even without an annotated training set.
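To make the pipeline concrete, the following is a minimal sketch, not the paper's implementation, of the unsupervised classification step: fitting a PLSI/PLSA model by EM to a documents-by-words count matrix, where each "document" is an image's bag-of-keypoints histogram. The helper `histogram_of_visual_words`, all variable names, and the choice of 25 topics (matching the 25 texture classes in the UIUC database) are illustrative assumptions.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Fit PLSA by EM on a (n_images, n_visual_words) count matrix.

    counts[d, w] = occurrences of visual word w in image d.
    Returns p_z_d (topic given image) and p_w_z (word given topic).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialization of the conditional distributions.
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior p(z | d, w) for every (image, topic, word).
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate from expected counts n(d, w) * p(z | d, w).
        weighted = counts[:, None, :] * joint
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# Hypothetical usage: build bag-of-keypoints histograms, then label each
# image by its dominant latent topic.
# counts = np.stack([histogram_of_visual_words(img) for img in images])
# p_z_d, _ = plsa(counts, n_topics=25)
# labels = p_z_d.argmax(axis=1)
```

In this sketch each latent topic plays the role of one texture class, so assigning an image to its highest-probability topic yields an unsupervised class label; descriptor extraction and the k-means vector quantization that produces the visual vocabulary are assumed to happen upstream.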