A dynamic texture is a generative model for video that
treats the video as a sample from a spatio-temporal stochastic
process. One limitation of the dynamic texture is that it
cannot model video containing multiple regions of motion
with different dynamics, e.g. a scene with both smoke
and fire. In this work, we introduce the layered dynamic texture
model, which addresses this limitation by assigning a
separate state process to each region of motion. We derive
the EM algorithm for learning the parameters of the model,
and demonstrate the efficacy of the proposed model for the
tasks of segmentation and synthesis of video.
Antoni B. Chan, Nuno Vasconcelos
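
To make the idea of a separate state process per region concrete, the following is a minimal sketch (not the authors' implementation) of sampling video from a layered model: each layer evolves under its own linear state process and generates the pixels assigned to it. The function name, parameter layout, and the assumption that pixel-to-layer assignments are fixed and known are illustrative simplifications, not details from the paper.

```python
import numpy as np

def sample_layered_dynamic_texture(layers, assignment, T, rng=None):
    """Sample a video of T frames (hypothetical helper for illustration).

    layers     : list of dicts with keys 'A' (n x n), 'C' (P x n), 'Q' (n x n), 'R' (scalar)
    assignment : length-P array giving the layer index of each pixel
    """
    rng = np.random.default_rng(rng)
    P = assignment.shape[0]
    video = np.zeros((T, P))
    for j, layer in enumerate(layers):
        A, C, Q, R = layer['A'], layer['C'], layer['Q'], layer['R']
        n = A.shape[0]
        x = rng.multivariate_normal(np.zeros(n), Q)   # initial state of layer j
        pix = np.flatnonzero(assignment == j)         # pixels belonging to this region
        for t in range(T):
            # each region is generated from its own state process
            video[t, pix] = C[pix] @ x + rng.normal(0.0, np.sqrt(R), size=pix.size)
            x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    return video

# tiny usage example: two 2-state layers over an 8-pixel "frame"
rng = np.random.default_rng(0)
make_layer = lambda: {'A': 0.9 * np.eye(2), 'C': rng.normal(size=(8, 2)),
                      'Q': 0.1 * np.eye(2), 'R': 0.01}
frames = sample_layered_dynamic_texture(
    [make_layer(), make_layer()],
    np.array([0, 0, 0, 0, 1, 1, 1, 1]), T=16)
```

In the paper, the layer assignments are themselves hidden variables learned jointly with the per-layer dynamics via EM; the sketch above fixes them only to show the generative structure.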