Automated analysis of facial images for the estimation of the displayed expression is essential in the design of intuitive and accessible human-computer interaction systems. In existing rule-based expression recognition approaches, different feature extraction techniques have been tested that allow for the automatic detection of feature points, providing the required input for rule-based expression analysis; each of these techniques outperforms the others only under specific constraints. In this paper we propose a feature extraction system that combines the outputs of multiple analysis channels, weighted by their confidence, yielding more accurate and error-resilient facial feature boundary detection. The proposed approach has been implemented as an extension to an existing expression analysis system within the framework of the IST ERMIS project.
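To illustrate the confidence-based combination of channels described above, the following is a minimal sketch, not the paper's actual method: it assumes each extraction channel returns the same number of corresponding boundary points together with a scalar confidence, and fuses them by confidence-weighted averaging. The function name and data layout are hypothetical.

```python
import numpy as np

def fuse_boundaries(channel_boundaries, channel_confidences):
    """Fuse per-channel feature-boundary estimates into one estimate,
    weighting each channel by its confidence.

    channel_boundaries: list of (N, 2) arrays of boundary points,
        one array per extraction channel (e.g. colour, edge, template).
    channel_confidences: list of scalar confidences in [0, 1].
    """
    weights = np.asarray(channel_confidences, dtype=float)
    if weights.sum() == 0:
        raise ValueError("at least one channel must have non-zero confidence")
    weights /= weights.sum()                        # normalise to sum to 1
    stacked = np.stack(channel_boundaries)          # shape (C, N, 2)
    return np.tensordot(weights, stacked, axes=1)   # (N, 2) fused boundary
```

In this sketch, a channel with low confidence contributes little to the fused boundary, so an erroneous detection in one channel is attenuated by the others; the actual system may use a different combination rule.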