In partial-duplicate image retrieval, images are commonly represented with a Bag-of-visual-Words (BoW) model built from local image features such as SIFT. The discriminative power of these local features therefore directly affects the BoW representation and its performance across applications. In this paper, we first propose a rotation-invariant Local Self-Similarity Descriptor (LSSD), which captures the internal geometric layout of the locally self-similar textural regions around interest points. We then combine LSSD with SIFT to build a multi-descriptor representation of images for partial-duplicate retrieval. Finally, we formulate the Semi-Relative Entropy as the distance metric. The retrieval performance of this multi-descriptor representation is evaluated on the Oxford Buildings dataset and an image corpus crawled from
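The Semi-Relative Entropy metric is defined in the body of the paper; as a rough illustration of entropy-based distances between BoW histograms, the sketch below computes the standard relative entropy (KL divergence) with additive smoothing. The function name, the smoothing scheme, and the toy histogram values are all hypothetical and not taken from the paper.

```python
import math

def kl_divergence(p, q, eps=1e-10):
    """Relative entropy D(p || q) between two discrete histograms.

    Both inputs are smoothed by `eps` (to avoid log(0) on empty bins)
    and normalized to probability distributions before the sum.
    This is the plain KL divergence, NOT the paper's Semi-Relative
    Entropy, whose exact form is given in the full text.
    """
    sp, sq = sum(p), sum(q)
    p = [(x + eps) / (sp + eps * len(p)) for x in p]
    q = [(x + eps) / (sq + eps * len(q)) for x in q]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy BoW histograms over a 4-word visual vocabulary (made-up counts).
hist_a = [5, 1, 0, 2]
hist_b = [4, 2, 1, 1]
dist = kl_divergence(hist_a, hist_b)
```

Note that plain relative entropy is asymmetric in its arguments, which is presumably part of what a "semi-relative" variant addresses.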