We present a new approach to robust pose-variant face
recognition that generalizes well even across entirely
different datasets, owing to its weak dependence on
training data. Most face recognition algorithms assume
that the face images are very well-aligned. This assumption
is often violated in real-life face recognition tasks,
in which face detection and rectification have to be performed
automatically prior to recognition. Although great
improvements have been made in face alignment recently,
significant pose variations may still occur in the aligned
faces. We propose a multiscale local descriptor-based face
representation to mitigate this issue. First, discriminative
local image descriptors are extracted from a dense set of
multiscale image patches. The descriptors are expanded
with their spatial locations. Each expanded descriptor is quantized
by a set of random projection trees. The final face representation
is a histogram of the quantized descriptors.
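The pipeline described above (dense multiscale descriptors, spatial expansion, quantization by random projection trees, histogram pooling) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the descriptor dimensionality, tree depth, number of trees, and synthetic data are all assumptions chosen for clarity.

```python
import numpy as np

def build_rp_tree(data, depth, rng):
    """Recursively split descriptors at the median of their projection
    onto a random Gaussian direction (a random projection tree)."""
    if depth == 0 or len(data) < 2:
        return None  # leaf
    direction = rng.standard_normal(data.shape[1])
    proj = data @ direction
    thresh = np.median(proj)
    left = build_rp_tree(data[proj <= thresh], depth - 1, rng)
    right = build_rp_tree(data[proj > thresh], depth - 1, rng)
    return (direction, thresh, left, right)

def leaf_index(tree, x, depth):
    """Route one descriptor to a leaf; the left/right bit string
    along the path is its quantization code."""
    idx, node = 0, tree
    for _ in range(depth):
        if node is None:  # path exhausted the training data
            idx *= 2
            continue
        direction, thresh, left, right = node
        bit = int(x @ direction > thresh)
        idx = idx * 2 + bit
        node = right if bit else left
    return idx

def face_histogram(descriptors, locations, trees, depth):
    """Expand each local descriptor with its (x, y) patch location,
    quantize it with every tree, and pool the leaf codes into one
    L1-normalized histogram (the face representation)."""
    expanded = np.hstack([descriptors, locations])
    n_leaves = 2 ** depth
    hist = np.zeros(len(trees) * n_leaves)
    for t, tree in enumerate(trees):
        for x in expanded:
            hist[t * n_leaves + leaf_index(tree, x, depth)] += 1
    return hist / hist.sum()

# Toy demo: 32-D synthetic "descriptors" plus 2-D patch locations.
rng = np.random.default_rng(0)
train = np.hstack([rng.standard_normal((500, 32)),
                   rng.uniform(0, 1, (500, 2))])
depth, n_trees = 3, 4
trees = [build_rp_tree(train, depth, rng) for _ in range(n_trees)]

desc = rng.standard_normal((100, 32))
locs = rng.uniform(0, 1, (100, 2))
h = face_histogram(desc, locs, trees, depth)
print(h.shape)  # (32,) = 4 trees x 2**3 leaves per tree
```

Because the trees use only random directions and data medians, the quantizer requires no labeled supervision, which is consistent with the weak data dependence claimed for the approach.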