A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work introduces the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of a mixture of dynamic textures, and the model is related to previous work in linear systems, machine learning, time-series clustering, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes (e.g., fire, steam, water, clouds, trees, and vehicle and pedestrian traffic) that have traditionally been challenging for computer vision. When compared with traditional representations based on optical flow or other localized m...
Antoni B. Chan, Nuno Vasconcelos
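As a rough illustration of the generative model underlying a single dynamic texture, the standard linear dynamical system formulation can be sketched as below. This is a minimal NumPy sketch under the usual LDS notation (hidden state x_t, observed frame y_t, transition matrix A, observation matrix C, noise covariances Q and R); the matrix names follow convention and the specific dimensions and values are illustrative assumptions, not the paper's.

```python
import numpy as np

def sample_dynamic_texture(A, C, Q, R, x0, T, rng):
    """Sample T frames from a linear dynamical system (dynamic texture):
         x_{t+1} = A x_t + v_t,  v_t ~ N(0, Q)   (hidden state dynamics)
         y_t     = C x_t + w_t,  w_t ~ N(0, R)   (observed, vectorized frame)
    """
    n = A.shape[0]            # state dimension
    m = C.shape[0]            # observation (pixel) dimension
    X = np.zeros((T, n))      # hidden state trajectory
    Y = np.zeros((T, m))      # observed frames (each row = one vectorized frame)
    x = x0
    for t in range(T):
        X[t] = x
        Y[t] = C @ x + rng.multivariate_normal(np.zeros(m), R)
        x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    return X, Y

# Illustrative parameters (assumed, not from the paper): a stable
# 3-dimensional state observed through 8 "pixels" for 50 frames.
rng = np.random.default_rng(0)
n, m, T = 3, 8, 50
A = 0.9 * np.eye(n)                   # stable transition matrix (|eigenvalues| < 1)
C = rng.standard_normal((m, n))       # observation matrix
Q = 0.01 * np.eye(n)                  # process noise covariance
R = 0.01 * np.eye(m)                  # observation noise covariance
X, Y = sample_dynamic_texture(A, C, Q, R, rng.standard_normal(n), T, rng)
print(Y.shape)  # (50, 8): 50 frames, each an 8-dimensional observation
```

In the mixture model the abstract describes, each mixture component is one such LDS with its own parameters (A, C, Q, R), and EM assigns each video sequence to a component while re-estimating those parameters.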