We propose a novel approach for determining whether a pair of images match under a high-distortion transformation or a non-structural relation. The co-occurrence statistics of features across a pair of images are learned from a training set comprising matched and mismatched image pairs, and are expressed in the form of a cross-feature ratio table. The proposed method does not require feature-to-feature correspondences; instead, it identifies and exploits feature co-occurrences that are discriminative for the transformation. The method not only allows the matching of test image pairs whose visual content differs substantially from that of the training set, but also caters for transformations and relations that do not preserve image structure.
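As an illustration only, a cross-feature ratio table can be sketched as follows. The feature representation and learning procedure here are assumptions for the sake of the example (images reduced to sets of quantized feature indices, a smoothed log-likelihood ratio as the table entry); they are not the paper's exact formulation.

```python
from collections import Counter
from itertools import product
import math

def ratio_table(matched_pairs, mismatched_pairs, eps=1.0):
    """Build a cross-feature log-ratio table from labeled image pairs.

    Each image is a set of quantized feature indices (a simplifying
    assumption). matched_pairs / mismatched_pairs are lists of
    (features_a, features_b) tuples.
    """
    def cooc_counts(pairs):
        counts = Counter()
        for fa, fb in pairs:
            for pair in product(fa, fb):
                counts[pair] += 1
        return counts, sum(counts.values())

    m_counts, m_total = cooc_counts(matched_pairs)
    n_counts, n_total = cooc_counts(mismatched_pairs)
    table = {}
    for key in set(m_counts) | set(n_counts):
        # Smoothed relative frequency under each hypothesis.
        p = (m_counts[key] + eps) / (m_total + eps)
        q = (n_counts[key] + eps) / (n_total + eps)
        table[key] = math.log(p / q)  # positive => evidence for a match
    return table

def match_score(table, feats_a, feats_b):
    """Score a test pair by summing log-ratios over all feature co-occurrences;
    unseen co-occurrences contribute nothing (no correspondences needed)."""
    return sum(table.get(pair, 0.0) for pair in product(feats_a, feats_b))
```

A pair is then declared a match when its score exceeds a threshold; because the table is indexed by feature co-occurrences rather than correspondences, a test pair can score highly even if its visual content never appeared in training.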