To support real-time tracking of objects in video sequences, considerable effort has been directed at developing optical flow and general motion-based image segmentation algorithms. The goal is to segment multiple moving objects in an image based on their relative motion. This task can be complicated by the presence of lighting variations. Furthermore, a combination of multiple motions and complex lighting effects can lead to dramatic image variations that may not be adequately accounted for by any single motion-based segmentation algorithm. We propose to fuse the results of multiple motion segmentation algorithms to improve system robustness. Our approach uses the Expectation Maximization (EM) algorithm as a fusion engine. It also uses Principal Components Analysis (PCA) to perform dimensionality reduction, which improves the performance of the EM algorithm and reduces the processing burden. The performance of the proposed fusion algorithm has been demonstrated in the "smart ...
Michael E. Farmer, Xiaoguang Lu, Hong Chen, Anil K
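The PCA-then-EM fusion pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the per-pixel feature vectors here are synthetic stand-ins for the outputs of several motion-segmentation algorithms, the data sizes and parameters are assumptions, and a simple two-component 1-D Gaussian mixture plays the role of the EM fusion engine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each pixel carries a feature vector stacking the (noisy)
# scores from five motion-segmentation algorithms. Two motion groups are
# simulated: "moving object" pixels and "background" pixels.
n_per = 200
fg = rng.normal(loc=1.0, scale=0.2, size=(n_per, 5))  # moving-object pixels
bg = rng.normal(loc=0.0, scale=0.2, size=(n_per, 5))  # background pixels
X = np.vstack([fg, bg])

# PCA via SVD: project the 5-D feature vectors onto the first principal
# component, reducing the dimensionality seen by EM to one.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]  # 1-D projection of every pixel

# EM for a two-component 1-D Gaussian mixture acting as the fusion engine.
mu = np.array([z.min(), z.max()])   # initialize means at the extremes
var = np.array([z.var(), z.var()])  # shared initial variance
pi = np.array([0.5, 0.5])           # uniform mixing weights
for _ in range(50):
    # E-step: posterior responsibility of each component for each pixel
    dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
        -(z[:, None] - mu) ** 2 / (2 * var))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate means, variances, and mixing weights
    nk = r.sum(axis=0)
    mu = (r * z[:, None]).sum(axis=0) / nk
    var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(z)

# Hard segmentation: assign each pixel to its most responsible component.
labels = r.argmax(axis=1)
```

In this toy setting the first principal component captures the mean offset between the two pixel groups, so EM on the 1-D projection cleanly separates them at a fraction of the cost of running EM in the full feature space.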