In this paper we develop a novel contrast-invariant appearance detection model. The goal is to distinguish object-specific images (e.g., face images) from generic background patches. The main contribution of this paper is the design of a perceptual distortion measure for comparing the appearance of an object to its reconstruction from the principal subspace. We demonstrate our approach on two datasets: separating eyes from non-eyes and faces from non-faces. On the eye database, for a true detection rate of 95%, we demonstrate a nine-fold improvement in the false-positive rate over a previously reported detection model [5]. We also compare our detection model with an SVM classifier.
Allan D. Jepson, Chakra Chennubhotla
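The detection rule summarized in the abstract, reconstructing a candidate patch from the principal subspace and scoring how far the patch departs from that reconstruction, can be sketched as follows. This is a minimal illustration under stated assumptions: it uses a plain squared reconstruction error as the distortion score, whereas the paper's contribution is a perceptual, contrast-invariant measure; the function names, the flattened-patch data layout, and the thresholding rule are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of subspace-reconstruction-based detection, assuming a plain
# squared residual as the distortion score (the paper's perceptual,
# contrast-invariant distortion measure is not reproduced here).
import numpy as np


def fit_principal_subspace(patches, k):
    """Fit a k-dimensional principal subspace to flattened training patches.

    patches: (n_samples, n_pixels) array of object patches (e.g. eyes, faces).
    Returns the mean patch and the top-k principal directions.
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]  # basis has shape (k, n_pixels)


def distortion_score(patch, mean, basis):
    """Distortion between a patch and its reconstruction from the subspace.

    Here this is simply the squared residual norm; the paper replaces this
    with a perceptual distortion measure.
    """
    centered = patch - mean
    coeffs = basis @ centered          # project onto the principal subspace
    reconstruction = basis.T @ coeffs  # reconstruct from the k coefficients
    return float(np.sum((centered - reconstruction) ** 2))


def classify(patch, mean, basis, threshold):
    """Label a patch as 'object' if its distortion falls below a threshold."""
    return distortion_score(patch, mean, basis) < threshold
```

In this sketch the threshold would be chosen on a validation set to fix the true detection rate (e.g., 95%), with the false-positive rate then measured on held-out background patches.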