In this paper, we propose a novel framework for face super-resolution based on a layered predictor network. In the first layer, multiple predictors are trained online with a dynamically constructed training set, which is adaptively selected so that the trained model is tailored to the test face. Once the dynamic training set is obtained, the optimal predictor is learned with the Resampling-Maximum Likelihood Model. To further enhance the robustness of the prediction and the smoothness of the hallucinated image, additional layers are designed to fuse the multiple predictors with a fusion rule learned from the training set. Experiments demonstrate the effectiveness of the framework.
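The following is a minimal sketch of the layered pipeline described above, for patch-based operation: a dynamically constructed training set is selected per test patch, several predictors are trained online on it, and a second layer fuses their outputs with weights learned from the training set. All names, neighborhood sizes, and estimators here are illustrative assumptions, not the paper's implementation; in particular, plain ridge regression stands in for the Resampling-Maximum Likelihood step, and the fusion rule is fit by ordinary least squares.

```python
import numpy as np

def dynamic_training_set(lr_patch, lr_train, hr_train, k):
    """Select the k LR/HR training pairs closest to the test LR patch."""
    dists = np.linalg.norm(lr_train - lr_patch, axis=1)
    idx = np.argsort(dists)[:k]
    return lr_train[idx], hr_train[idx]

def fit_ridge(X, Y, lam=1e-3):
    """Linear LR->HR mapping with Tikhonov regularization (stand-in predictor)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def first_layer_predictions(lr_patch, lr_train, hr_train, ks=(50, 100, 200)):
    """Layer 1: one online-trained predictor per (assumed) neighborhood size."""
    preds = []
    for k in ks:
        X, Y = dynamic_training_set(lr_patch, lr_train, hr_train, k)
        W = fit_ridge(X, Y)
        preds.append(lr_patch @ W)
    return np.stack(preds)          # shape: (num_predictors, hr_dim)

def learn_fusion_weights(lr_train, hr_train, ks=(50, 100, 200)):
    """Layer 2: fusion weights learned from the training set (here by least squares)."""
    P, T = [], []
    for lr_patch, hr_patch in zip(lr_train, hr_train):
        P.append(first_layer_predictions(lr_patch, lr_train, hr_train, ks))
        T.append(hr_patch)
    P = np.stack(P)                 # (n, num_predictors, hr_dim)
    T = np.stack(T)                 # (n, hr_dim)
    # One scalar weight per predictor, fit over all training patches and pixels.
    A = P.transpose(1, 0, 2).reshape(len(ks), -1).T
    w, *_ = np.linalg.lstsq(A, T.reshape(-1), rcond=None)
    return w

def hallucinate_patch(lr_patch, lr_train, hr_train, w, ks=(50, 100, 200)):
    """Fuse the first-layer predictions into the final hallucinated HR patch."""
    return w @ first_layer_predictions(lr_patch, lr_train, hr_train, ks)
```

In this sketch, `lr_train` and `hr_train` are aligned matrices of vectorized low- and high-resolution face patches; a full image would be hallucinated by running `hallucinate_patch` over overlapping patches and averaging the overlaps.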