This article deals with facial segmentation and lip tracking with feedback control for real-time animation of a synthetic 3D face model. Classical approaches consist of two successive steps: video analysis, then synthesis. We instead build a global analysis/synthesis processing loop, in which the image analysis relies on the 3D synthesis and conversely. To this end, our analysis algorithm fits a generic 3D face model to the speaker's face, so that synthesis information (such as 3D data or face shape) can be exploited. This approach is inspired by control systems theory with feedback loops. The contribution of this paper is to use simple image processing techniques on the available data, while improving the segmentation through the feedback loop. Moreover, we propose robust lip corner tracking based on a motion estimation algorithm. The speaker is only required to be in front of the camera with the mouth closed at the beginning of the video session (neutral position). This allows a quick initialisation...