This work presents a novel methodology for transforming facial expressions to assist face biometrics. Identification with only one image per subject is known to pose a great challenge to recognizers, because drastic facial expressions introduce variability on which the recognizer has not been trained. The proposed framework uses a single image per subject to predict intra-class variability by synthesizing new expressions, which are subsequently used to train the discriminant. The gallery expression is transformed using the bivariate empirical mode decomposition (BEMD), which allows simultaneous analysis of the probe image and a targeted expression mask. We advocate that 2D BEMD is a powerful tool for multi-resolution face analysis. Tested on a database of 96 individuals, the proposed framework achieves a recognition rate of 90% at a false acceptance rate (FAR) of 1%.
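The abstract does not include an implementation. As a rough illustration of the underlying idea only, the following is a minimal, simplified sketch of one bivariate-EMD sifting step on a 1D complex signal, where the real part stands in for a gallery signal and the imaginary part for an expression mask. It uses linear envelope interpolation instead of the splines of the full method, and all function names and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sift_bivariate(z, n_dirs=8, n_iter=10):
    """Extract one intrinsic mode function (IMF) from a complex signal
    with a simplified bivariate-EMD sift: project the signal onto
    several directions in the complex plane, interpolate the maxima of
    each projection, and subtract the mean of the envelopes."""
    t = np.arange(len(z))
    h = z.copy()
    for _ in range(n_iter):
        env = np.zeros_like(h)
        for k in range(n_dirs):
            phi = np.exp(-1j * 2 * np.pi * k / n_dirs)
            p = (h * phi).real  # projection onto direction k
            # local maxima of the projected signal
            idx = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
            if len(idx) < 2:
                return h  # too few extrema: stop sifting
            # linear interpolation of the complex signal at the extrema
            # (the original method uses spline envelopes)
            env += (np.interp(t, idx, h.real[idx])
                    + 1j * np.interp(t, idx, h.imag[idx]))
        h = h - env / n_dirs  # remove the mean envelope
    return h

# toy signal: a fast oscillation plus a slow trend in both components
t = np.linspace(0, 1, 256)
z = (np.sin(2 * np.pi * 20 * t) + 1j * np.cos(2 * np.pi * 20 * t)
     + 0.5 * (np.sin(2 * np.pi * 2 * t) + 1j * t))
imf1 = sift_bivariate(z)       # fast mode, extracted jointly
residual = z - imf1            # slower content left behind
```

In the paper's setting this joint decomposition is what lets the gallery image and the expression mask be analyzed on common scales; the 2D version sifts image surfaces rather than 1D signals, but the envelope-mean structure is the same.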