Images of an object undergoing ego- or camera motion
often appear as scaled, rotated, and deformed versions
of one another. Detecting and matching such distorted patterns
against a single sample view of the object requires solving a hard
computational problem that has eluded most object matching
methods. We propose a linear formulation that simultaneously
finds feature point correspondences and a global geometrical
transformation in a constrained solution space.
By further reducing the search space based on the lower convex
hull property of the formulation, our method scales well
with the number of candidate features. Results on a variety
of images and videos demonstrate that our method is
accurate, efficient, and robust to local deformation, occlusion,
clutter, and large geometrical transformations.
Hao Jiang, Stella X. Yu
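The linear formulation described above can be illustrated as a small linear program: relaxed correspondence weights and a global similarity transform (scale, rotation, translation) are optimized jointly, with the L1 geometric error kept linear through slack variables. The sketch below is a simplified instance of this class of formulation, not the paper's exact program; the point sets, the simulated appearance costs, and the `lam` weighting parameter are assumptions made for the demo.

```python
import numpy as np
from scipy.optimize import linprog

def lp_match(model, candidates, appearance_cost, lam=1.0):
    """Jointly relax correspondences x_ij in [0,1] and a global similarity
    transform [[a, -b], [b, a]] plus translation (tx, ty) into one LP.
    Simplified sketch of the linear-formulation idea, not the paper's program."""
    m, n = len(model), len(candidates)
    nx, nt, ne = m * n, 4, 2 * m          # assignment, transform, L1 slacks
    nv = nx + nt + ne

    # Objective: appearance cost on x, weighted geometric error on slacks.
    c = np.concatenate([appearance_cost.ravel(), np.zeros(nt), lam * np.ones(ne)])

    # Each model point distributes unit weight over all candidates.
    A_eq = np.zeros((m, nv))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0
    b_eq = np.ones(m)

    # |sum_j x_ij q_j - (R p_i + t)| <= e_i, split into +/- inequalities.
    A_ub = np.zeros((4 * m, nv))
    b_ub = np.zeros(4 * m)
    for i, (px, py) in enumerate(model):
        for k in range(2):                 # coordinate: 0 = x, 1 = y
            coeff_q = candidates[:, k]
            # -(R p + t)_k as a linear function of (a, b, tx, ty)
            tcoef = np.array([-px, py, -1.0, 0.0]) if k == 0 else \
                    np.array([-py, -px, 0.0, -1.0])
            for s_idx, sign in enumerate((1.0, -1.0)):
                r = 2 * (2 * i + k) + s_idx
                A_ub[r, i * n:(i + 1) * n] = sign * coeff_q
                A_ub[r, nx:nx + nt] = sign * tcoef
                A_ub[r, nx + nt + 2 * i + k] = -1.0   # minus slack e_ik
    bounds = [(0, 1)] * nx + [(None, None)] * nt + [(0, None)] * ne
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x = res.x[:nx].reshape(m, n)
    a, b, tx, ty = res.x[nx:nx + nt]
    return x, (a, b, tx, ty)

# Demo: a model triangle, its scaled/rotated/translated copy, plus clutter.
rng = np.random.default_rng(0)
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
s, th = 2.0, 0.5
R = s * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
true_pts = model @ R.T + np.array([3.0, -1.0])
clutter = rng.uniform(-5, 5, size=(5, 2))
candidates = np.vstack([true_pts, clutter])

# Simulated descriptor cost (assumption): low for the correct candidate.
cost = np.ones((3, len(candidates)))
cost[np.arange(3), np.arange(3)] = 0.0

x, (a, b, tx, ty) = lp_match(model, candidates, cost)
```

At the optimum, `np.argmax(x, axis=1)` recovers the correct correspondences and the recovered scale `np.hypot(a, b)` and rotation `np.arctan2(b, a)` match the simulated transform. Note the relaxation: because all terms stay linear in the assignment weights and the transform parameters, a single LP solve replaces a combinatorial search over matchings and transforms.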