Background modeling and subtraction is a fundamental task in many computer vision and video processing applications. We present a novel probabilistic background modeling and subtraction method that exploits spatial and temporal dependencies between pixels. Starting from an initial clustering of the background scene, we model each pixel by a mixture of spatiotemporal Gaussian distributions, where each distribution locally represents a region in the neighborhood of the pixel. By extracting the local properties around each pixel, the proposed method obtains accurate models of dynamic backgrounds that are highly effective in detecting foreground objects. Experimental results on indoor and outdoor surveillance videos demonstrate the performance advantages of the proposed method over other multimodal methods.
S. Derin Babacan, Thrasyvoulos N. Pappas
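The sketch below is only a rough illustration of the per-pixel mixture-of-Gaussians background subtraction framework (Stauffer-Grimson style) that methods of this kind build on; it is not the authors' algorithm. In particular, it models each pixel's scalar intensity independently, whereas the proposed method uses spatiotemporal Gaussian components that describe local regions around each pixel. All parameter values and names are illustrative assumptions.

```python
# Minimal per-pixel mixture-of-Gaussians background model (illustrative sketch,
# not the paper's spatiotemporal method). Parameters below are assumptions.
import numpy as np

K = 3              # number of Gaussian components per pixel
ALPHA = 0.01       # learning rate
T = 0.7            # fraction of total weight attributed to the background
MATCH_SIGMA = 2.5  # match threshold in standard deviations


class PixelGMM:
    """Mixture-of-Gaussians model for a single pixel's intensity."""

    def __init__(self):
        self.weights = np.full(K, 1.0 / K)
        self.means = np.linspace(0.0, 255.0, K)
        self.vars = np.full(K, 30.0 ** 2)

    def update(self, x):
        """Update the mixture with observation x; return True if x is foreground."""
        d = np.abs(x - self.means)
        matched = d < MATCH_SIGMA * np.sqrt(self.vars)
        if matched.any():
            k = int(np.argmax(matched))       # first matching component
            rho = ALPHA                        # simplified ownership probability
            self.means[k] += rho * (x - self.means[k])
            self.vars[k] += rho * ((x - self.means[k]) ** 2 - self.vars[k])
            self.weights = (1 - ALPHA) * self.weights
            self.weights[k] += ALPHA
        else:
            # Replace the least probable component with one centered on x.
            k = int(np.argmin(self.weights))
            self.means[k] = x
            self.vars[k] = 30.0 ** 2
            self.weights[k] = ALPHA
        self.weights /= self.weights.sum()

        # Components with the largest weight/stddev ratio form the background.
        order = np.argsort(-self.weights / np.sqrt(self.vars))
        cum = np.cumsum(self.weights[order])
        background = set(order[: int(np.searchsorted(cum, T)) + 1])
        return not (matched.any() and int(np.argmax(matched)) in background)
```

In this baseline each pixel is treated independently; the abstract's key point is that replacing the per-pixel feature with spatiotemporal features drawn from the pixel's neighborhood lets the model capture dynamic backgrounds that independent per-pixel mixtures handle poorly.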