We present a feature matching algorithm that leverages bottom-up segmentation. Unlike conventional image-to-image or region-to-region matching algorithms, our method finds corresponding points in an “asymmetric” manner, matching features within each region of a segmented image to a second unsegmented image. We develop a dynamic programming solution to efficiently identify corresponding points for each region, so as to maximize both geometric consistency and appearance similarity. The final matching score between two images is determined by the union of corresponding points obtained from each region-to-image match. Our encoding for the geometric constraints makes the algorithm flexible when matching objects exhibiting non-rigid deformations or intra-class appearance variation. We demonstrate our image matching approach applied to object category recognition, and show on the Caltech-256 and Caltech-101 datasets that it outperforms existing image matching measures by 10–20% in nearest-neighbor...
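To make the abstract's dynamic programming idea concrete, the sketch below illustrates one way a region-to-image match could be scored: features inside a single segmented region are assumed to be ordered (e.g., along a scan of the region), each is assigned to some feature in the unsegmented image, and a DP recurrence trades off descriptor (appearance) similarity against a geometric-distortion penalty between consecutive assignments. The function name, the specific appearance and geometry costs, and the weighting parameter `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def match_region_to_image(region_feats, region_xy, image_feats, image_xy, lam=1.0):
    """Hedged sketch of an asymmetric region-to-image match via DP.

    region_feats : (m, d) descriptors of features inside one segmented region,
                   assumed to be given in a fixed order.
    region_xy    : (m, 2) locations of those features.
    image_feats  : (n, d) descriptors of all features in the unsegmented image.
    image_xy     : (n, 2) locations of those features.
    lam          : assumed weight trading geometric consistency vs. appearance.

    Returns a list of (region_idx, image_idx) correspondences and their score.
    """
    m, n = len(region_feats), len(image_feats)

    # Appearance similarity: negative Euclidean distance between descriptors.
    app = -np.linalg.norm(region_feats[:, None, :] - image_feats[None, :, :], axis=2)

    # Pairwise offsets between candidate image points: entry [j_prev, j] = xy[j] - xy[j_prev].
    pair_off = image_xy[None, :, :] - image_xy[:, None, :]

    # dp[i, j] = best score over assignments of region features 0..i with feature i -> image point j.
    dp = np.full((m, n), -np.inf)
    back = np.zeros((m, n), dtype=int)
    dp[0] = app[0]
    for i in range(1, m):
        # Geometric consistency: penalize disagreement between the offset of
        # consecutive region features and the offset of their matched image points.
        d_region = region_xy[i] - region_xy[i - 1]
        geo = -lam * np.linalg.norm(pair_off - d_region, axis=2)   # (n_prev, n)
        cand = dp[i - 1][:, None] + geo                            # (n_prev, n)
        back[i] = np.argmax(cand, axis=0)
        dp[i] = app[i] + cand[back[i], np.arange(n)]

    # Trace back the highest-scoring chain of correspondences.
    best_j = int(np.argmax(dp[-1]))
    score = float(dp[-1, best_j])
    matches, j = [(m - 1, best_j)], best_j
    for i in range(m - 1, 0, -1):
        j = int(back[i, j])
        matches.append((i - 1, j))
    return matches[::-1], score
```

Under these assumptions the recurrence costs O(m·n²) per region; an image-to-image score in the spirit of the abstract would then pool the correspondences returned for every segmented region.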