Although facial features are considered essential for humans to understand sign language, no prior work has examined their significance for automatic sign language recognition or presented evaluation results. This paper describes a vision-based recognition system that employs both manual and facial features extracted from the same input image. For facial feature extraction, an active appearance model is applied to identify areas of interest such as the eye and mouth regions. Afterwards, a numerical description of facial expression and lip outline is computed. An extensive evaluation was performed on a new sign language corpus containing continuous articulations of 25 native signers. The results obtained confirm the importance of integrating facial expressions into the classification process: recognition rates for isolated and continuous signing increased in both signer-dependent and signer-independent operation modes. Interestingly, roughly two of ten...
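To make the facial-feature step concrete, the following is a minimal sketch of computing a numerical description of the lip outline from mouth contour points such as those fitted by an active appearance model. The landmark layout, point count, and descriptor choice here are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np

def lip_outline_descriptor(landmarks: np.ndarray) -> np.ndarray:
    """Compute a simple, scale-invariant description of the lip outline.

    `landmarks` is an (N, 2) array of 2-D mouth contour points, e.g. as
    fitted by an active appearance model. The descriptor combines a mouth
    aspect ratio (an open-vs-closed cue) with width-normalized radial
    distances of the contour points from the mouth center.
    """
    center = landmarks.mean(axis=0)
    centered = landmarks - center
    # Normalize by mouth width so the descriptor is scale-invariant.
    width = landmarks[:, 0].max() - landmarks[:, 0].min()
    height = landmarks[:, 1].max() - landmarks[:, 1].min()
    aspect = height / width
    # Radial distances of contour points from the center, width-normalized.
    radii = np.linalg.norm(centered, axis=1) / width
    return np.concatenate(([aspect], radii))

# Illustrative 8-point mouth contour (hypothetical coordinates).
mouth = np.array([
    [0.0, 0.5], [0.5, 0.3], [1.0, 0.2], [1.5, 0.3],
    [2.0, 0.5], [1.5, 0.7], [1.0, 0.8], [0.5, 0.7],
])
desc = lip_outline_descriptor(mouth)
```

A descriptor of this form could then be concatenated with the manual features before classification; the paper's actual numerical representation may differ.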