We present an algorithm and the associated capture methodology to acquire and track the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain with methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data and register it over time to generate a single deforming surface. Experiments were performed on video sequences of untextured cloth, filmed under spatially separated red, green, and blue light sources. Our first finding is that using zero-depth silhouettes as the initial boundary condition already produces smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D optical flow, the first frame's reconstruction can be registered to every subsequent frame.
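As an illustration of the multispectral photometric stereo step summarized above, the sketch below recovers a per-pixel surface normal from the RGB response of a pixel lit simultaneously by three spatially separated red, green, and blue lights. It assumes a Lambertian surface and a calibrated 3x3 matrix `M` mapping a unit normal to the observed color; the function name, the example matrix, and the normalization step are illustrative assumptions, not the paper's exact calibration procedure.

```python
# Minimal sketch of per-pixel normal recovery under three colored lights,
# assuming the linear image-formation model c = M n (an assumption for
# illustration; M bundles light directions, spectra, and camera response).
import numpy as np

def normals_from_rgb(image, M):
    """Recover a per-pixel unit normal field from a linear RGB image.

    image : (H, W, 3) array of linear RGB intensities.
    M     : (3, 3) calibration matrix mapping a unit normal to RGB.
    """
    M_inv = np.linalg.inv(M)
    # Invert c = M n independently at every pixel.
    n = image @ M_inv.T                       # (H, W, 3) unnormalized normals
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-8, None)      # normalize to unit length

if __name__ == "__main__":
    # Synthetic calibration matrix: rows roughly correspond to the three
    # colored lights seen from different directions (hypothetical values).
    M = np.array([[0.8, 0.1, 0.55],
                  [0.1, 0.8, 0.55],
                  [0.0, 0.0, 1.00]])
    frame = np.random.rand(4, 4, 3)           # stand-in for one video frame
    normals = normals_from_rgb(frame, M)
    print(normals.shape)                      # (4, 4, 3)
```

In the pipeline described above, such a per-frame normal field would then be integrated into a depth map using the silhouette as a zero-depth boundary condition, and 2D optical flow would carry the first frame's registration forward through the sequence.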