We present generative models dedicated to face recognition. Our models consider data extracted from color face images and use Bayesian networks to model relationships between different observations derived from a single face. Specifically, we investigate the use of color as a complementary observation to local, grayscale-based features. This is done by means of new generative models that combine color and grayscale information in a principled way. Color is incorporated either at the global face level, at the local facial feature level, or at both levels. Experiments on the face authentication task are conducted on two benchmark databases, XM2VTS and BANCA. The results show that integrating color in this way improves performance not only over a similar baseline system operating on grayscale information alone, but also over an Eigenfaces-based system where information from the different color channels is treated independently.
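
To make the idea of combining color and grayscale observations in a generative framework more concrete, the following minimal sketch (not the paper's actual model) scores a face-authentication claim by summing the log-likelihoods of local grayscale features and a global color observation, under the assumption that they are conditionally independent given the identity node, and comparing a client model against a world model. All names, dimensions, and parameter values are illustrative assumptions.

```python
import numpy as np

def diag_gaussian_logpdf(x, mean, var):
    """Log-density of a diagonal-covariance Gaussian."""
    x, mean, var = map(np.asarray, (x, mean, var))
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def joint_log_likelihood(gray_feats, color_feat, params):
    """Sum the log-likelihoods of local grayscale features and a global color
    observation, assuming conditional independence given the identity."""
    ll = sum(diag_gaussian_logpdf(f, params["gray_mean"], params["gray_var"])
             for f in gray_feats)
    ll += diag_gaussian_logpdf(color_feat, params["color_mean"], params["color_var"])
    return ll

def authenticate(gray_feats, color_feat, client_params, world_params, threshold=0.0):
    """Accept the identity claim if the client-vs-world log-likelihood ratio
    exceeds the decision threshold."""
    llr = (joint_log_likelihood(gray_feats, color_feat, client_params)
           - joint_log_likelihood(gray_feats, color_feat, world_params))
    return llr > threshold, llr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_gray, d_color = 64, 16  # illustrative feature dimensions
    client = {"gray_mean": rng.normal(size=d_gray), "gray_var": np.ones(d_gray),
              "color_mean": rng.normal(size=d_color), "color_var": np.ones(d_color)}
    world = {"gray_mean": np.zeros(d_gray), "gray_var": 2 * np.ones(d_gray),
             "color_mean": np.zeros(d_color), "color_var": 2 * np.ones(d_color)}
    # Synthetic observations drawn near the client model
    gray_feats = [client["gray_mean"] + 0.1 * rng.normal(size=d_gray) for _ in range(5)]
    color_feat = client["color_mean"] + 0.1 * rng.normal(size=d_color)
    accept, score = authenticate(gray_feats, color_feat, client, world)
    print(f"accept={accept}, log-likelihood ratio={score:.2f}")
```

In an actual system the client and world parameters would be estimated from training data, and the color node could equally be attached at the local feature level rather than globally; this sketch only illustrates how the additional color observation contributes an additive log-likelihood term to the overall generative score.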