Emotion recognition has become a popular area in human-robot interaction research. By recognizing facial expressions, a robot can interact with a person in a friendlier manner. In this paper, we propose a bimodal emotion recognition system that combines image and speech signals. A novel probabilistic strategy is studied for a support vector machine (SVM)-based classifier to statistically assign information-fusion weights to the two feature modalities. The fusion weights are determined by the distance between the test data and the classification hyperplane and by the standard deviation of the training samples. In the subsequent bimodal SVM classification, the recognition result of the modality with the higher weight is selected. The complete procedure has been implemented in a DSP-based embedded system to recognize five facial expressions online in real time. The experimental results show that an average recognition rate of 86.9% is achieved, a 5% improvement compared to using only image information.
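
The abstract describes the fusion rule only at a high level; the Python sketch below illustrates one plausible reading of it, in which each modality's weight is derived from the test sample's distance to its SVM hyperplane normalized by the standard deviation of the training samples' distances, and the prediction of the higher-weighted modality is kept. The helper names (`train_modality_svm`, `fused_prediction`) and the exact normalization are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def train_modality_svm(X_train, y_train):
    """Train one SVM per modality and record the spread (std) of the
    training samples' distances to the decision hyperplanes."""
    clf = SVC(kernel="linear", decision_function_shape="ovr")
    clf.fit(X_train, y_train)
    # Per-sample confidence: largest one-vs-rest hyperplane distance.
    train_conf = clf.decision_function(X_train).max(axis=1)
    return clf, float(np.std(train_conf))

def fused_prediction(clf_img, std_img, clf_sp, std_sp, x_img, x_sp):
    """Select the label from the modality with the higher fusion weight,
    where the weight is the hyperplane distance normalized by the
    training-sample standard deviation (an assumed weighting scheme)."""
    d_img = clf_img.decision_function(x_img.reshape(1, -1)).max()
    d_sp = clf_sp.decision_function(x_sp.reshape(1, -1)).max()
    w_img = abs(d_img) / std_img   # image-modality fusion weight
    w_sp = abs(d_sp) / std_sp      # speech-modality fusion weight
    chosen, x = (clf_img, x_img) if w_img >= w_sp else (clf_sp, x_sp)
    return chosen.predict(x.reshape(1, -1))[0]
```

In this sketch the decision-level fusion simply selects the more confident modality per test sample, which matches the abstract's statement that the recognition result with the higher weight is chosen.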