We present an approach for extracting a coherently sampled animated mesh from an input sequence of incoherently sampled meshes representing a continuously evolving shape. Our approach is based on a multiscale adaptive motion estimation procedure followed by propagation of a template mesh through time. Adaptive signed distance volumes serve as the principal shape representation, and a Bayesian optical flow algorithm is adapted to the surface setting with a modification that diminishes interference between unrelated surface regions. Additionally, a parametric smoothing step is employed to improve the sampling coherence of the model. The result of the proposed procedure is a single animated mesh. We apply our approach to human motion data.
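To make the principal shape representation concrete, the following Python sketch samples a signed distance volume for an analytic sphere on a uniform grid. This is a simplified stand-in only: the paper uses adaptive volumes and real scanned geometry, and every name here (sphere_sdf, sample_sdf_volume, the chosen bounds and resolution) is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from each point to a sphere (negative inside)."""
    return np.linalg.norm(points - center, axis=-1) - radius

def sample_sdf_volume(sdf, bounds_min, bounds_max, resolution):
    """Sample an SDF on a regular grid, yielding a signed distance volume.

    A uniform grid is used here for simplicity; an adaptive volume would
    refine resolution only near the zero level set (the surface).
    """
    axes = [np.linspace(bounds_min[i], bounds_max[i], resolution)
            for i in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    return sdf(grid)

# Build a toy signed distance volume for one "frame" of the sequence.
volume = sample_sdf_volume(
    lambda p: sphere_sdf(p, center=np.zeros(3), radius=0.5),
    bounds_min=(-1.0, -1.0, -1.0),
    bounds_max=(1.0, 1.0, 1.0),
    resolution=64,
)

# Negative values lie inside the shape; sign changes between neighboring
# voxels mark where the surface passes, which is what a motion estimator
# operating on such volumes would track from frame to frame.
inside = volume < 0
print("fraction of volume inside the shape:", inside.mean())
```

In a full pipeline along the lines sketched in the abstract, one such volume would be built per input frame, a multiscale surface flow would be estimated between consecutive volumes, and the template mesh's vertices would be advected along that flow while keeping its connectivity fixed.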