We describe a system designed to assist in extracting high-level information from sets or sequences of images. We show that principal components analysis followed by a neural-network learning phase can perform feature extraction or motion tracking, even through occlusion. Given a minimal amount of user direction for the learning phase, a wide range of features can be extracted automatically. Features discussed in this paper include information associated with human head motions and a bird's wings during take-off. We have quantified the results, showing, for instance, that with only 25 out of 424 frames of hand-labelled information, a system to track a person's nose can be trained almost as accurately as a human attempting the same task. We demonstrate a system that is powerful, flexible and, above all, easy for non-specialists to use.
David P. Gibson, Neill W. Campbell, Colin J. Dalton
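As a rough illustration of the pipeline the abstract outlines, the sketch below projects each frame onto its principal components and trains a small regression network on a hand-labelled subset to predict a feature position in every frame. It is a minimal sketch only: scikit-learn's PCA and MLPRegressor stand in for the paper's PCA and neural-network stages, and the function names, parameters, and data layout (`frames`, `labelled_idx`, `labels`) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor


def train_tracker(frames, labelled_idx, labels, n_components=20):
    """Sketch of the PCA + neural-network learning phase (assumed layout).

    frames:       (n_frames, height*width) array of flattened greyscale frames
    labelled_idx: indices of the hand-labelled frames (e.g. 25 of 424)
    labels:       (n_labelled, 2) array of hand-labelled (x, y) feature positions
    """
    # Project every frame onto its principal components to obtain a
    # low-dimensional description of the image sequence.
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(frames)

    # Learn a mapping from PCA coefficients to the labelled feature position
    # (e.g. the nose) using only the small hand-labelled subset.
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    net.fit(coeffs[labelled_idx], labels)
    return pca, net


def track(pca, net, frames):
    # Predict the feature position in every frame, labelled or not,
    # including frames where the feature may be occluded.
    return net.predict(pca.transform(frames))
```

In this toy version, the low-dimensional PCA coefficients let the network generalise from a handful of labelled frames to the whole sequence, which is the behaviour the quantified nose-tracking result describes.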