Most tracking algorithms detect moving objects by comparing incoming images against a reference frame. Crucially, this reference image must adapt continuously to the current lighting conditions if objects are to be accurately differentiated. In this work, a novel appearance-model method is presented, based on the eigen-background approach [1]. The image can be efficiently represented by a set of appearance models with few significant dimensions. Rather than accumulating the necessarily enormous training set required to generate the eigen-model offline, the described technique builds and adapts the eigen-model online, evolving both its parameters and the number of significant dimensions. For each incoming image, a reference frame can be efficiently hypothesized from a subsample of the incoming pixels. A comparative evaluation that measures segmentation accuracy against large amounts of manually derived ground truth is presented.
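To make the eigen-background idea concrete, the following is a minimal sketch, assuming a batch PCA with a fixed number of dimensions k in place of the paper's online, variable-dimension update; the function names, the sample_idx parameter, and the threshold value are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch of eigen-background modelling: build a low-dimensional
# appearance basis, then hypothesize a reference frame from a pixel subsample.
# Batch PCA and fixed k are simplifying assumptions for illustration only.
import numpy as np

def build_eigen_background(frames, k=10):
    """Batch PCA over vectorized training frames (stand-in for the online update)."""
    X = np.stack([f.ravel().astype(np.float64) for f in frames])   # (N, P)
    mean = X.mean(axis=0)
    # SVD of the centred data yields the principal appearance directions.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                                             # mean (P,), basis (k, P)

def hypothesize_reference(frame, mean, basis, sample_idx):
    """Estimate the full reference frame from a subsample of incoming pixels."""
    x = frame.ravel().astype(np.float64)
    # Least-squares fit of the eigen coefficients using only the sampled pixels.
    A = basis[:, sample_idx].T                                      # (S, k)
    b = x[sample_idx] - mean[sample_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return (mean + coeffs @ basis).reshape(frame.shape)             # reconstructed reference

def segment_foreground(frame, reference, threshold=25.0):
    """Pixels differing strongly from the hypothesized reference are foreground."""
    return np.abs(frame.astype(np.float64) - reference) > threshold
```

In this sketch, sample_idx would be a small random subset of pixel indices; fitting the eigen coefficients from that subsample is what allows the reference frame to be hypothesized cheaply for each incoming image.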
Jonathan D. Rymel, John-Paul Renno, Darrel Greenhi