The point cloud is one of the primary representations of 3D data today. Although much work has been done on 2D image matching, matching 3D points acquired from different viewpoints or at different times remains a challenging problem. This paper proposes a 3D local descriptor based on 3D self-similarities. We not only extend the concept of 2D self-similarity [1] to 3D space, but also establish a similarity measure that combines geometric and photometric information. The matching process is fully automatic, i.e., it requires no manually selected landmarks. Results on LiDAR and model data sets show that our method performs robustly on 3D data under various transformations and noise.