Facial movements play an important role in the interpretation of spoken conversation and emotion. These movements fall into several types, such as conversational signals and emotion displays; we call these types channels of facial movement. Realistic animation of these movements would improve the realism and liveliness of human-computer interaction through embodied conversational agents. To date, no appropriate method has been proposed for integrating all facial movements. In this paper, we propose a scheme for combining facial movements on a 3D talking head. First, we concatenate the movements within each channel to generate smooth transitions between adjacent movements; this concatenation operates on individual muscles. The movements from all channels are then combined, resolving conflicts between muscles where they arise.
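As a rough illustration of the two-step combination described above, the sketch below concatenates per-muscle contraction values within a channel using a linear cross-fade, then merges channels with a simple per-muscle conflict rule. This is a minimal sketch only: the names (`Movement`, `channel_value`, `combine_channels`), the cross-fade window, and the dominance-based conflict resolution are illustrative assumptions, not the algorithm proposed in the paper.

```python
# Minimal sketch of combining channels of facial movement per muscle.
# All blending rules here are placeholder assumptions for illustration.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Movement:
    start: float               # start time (seconds)
    end: float                 # end time (seconds)
    muscles: Dict[str, float]  # contraction level per muscle name


def channel_value(channel: List[Movement], muscle: str, t: float,
                  fade: float = 0.2) -> float:
    """Concatenate movements of one channel for a single muscle,
    cross-fading linearly over `fade` seconds around movement boundaries."""
    value, weight = 0.0, 0.0
    for m in channel:
        if m.start - fade <= t <= m.end + fade:
            # ramp in/out near the boundaries, full weight inside the movement
            w_in = min(1.0, max(0.0, (t - (m.start - fade)) / fade))
            w_out = min(1.0, max(0.0, ((m.end + fade) - t) / fade))
            w = min(w_in, w_out)
            value += w * m.muscles.get(muscle, 0.0)
            weight += w
    return value / weight if weight > 0 else 0.0


def combine_channels(channels: List[List[Movement]], muscle: str, t: float) -> float:
    """Combine one muscle across all channels; contributions pulling in
    opposite directions are resolved by keeping the dominant one."""
    contributions = [channel_value(ch, muscle, t) for ch in channels]
    positive = sum(c for c in contributions if c > 0)
    negative = sum(c for c in contributions if c < 0)
    combined = positive if abs(positive) >= abs(negative) else negative
    return max(-1.0, min(1.0, combined))  # clamp to a valid contraction range


if __name__ == "__main__":
    # e.g., a speech-related movement and an overlapping emotion display
    speech = [Movement(0.0, 1.0, {"zygomatic_major": 0.2})]
    emotion = [Movement(0.5, 2.0, {"zygomatic_major": 0.8})]
    print(combine_channels([speech, emotion], "zygomatic_major", 0.6))
```

In this sketch each channel is just a list of timed movements, and conflict resolution is reduced to choosing the dominant direction of contraction per muscle; the paper's actual combination scheme may differ in both the transition model and the conflict-resolution rule.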