Man-Machine Interaction (MMI) systems that utilize multimodal information about users' current emotional state are presently at the forefront of interest of the computer vision and artificial intelligence communities. A lifelike avatar can enhance interactive applications. In this paper, we present the implementation of GretaEngine and the synthesis of facial expressions, including intermediate ones, based on the MPEG-4 standard and Whissell's emotion representation.
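The idea of intermediate expressions can be sketched as follows: a minimal, hypothetical Python example that blends the MPEG-4 FAP values of two archetypal expressions according to how close a target emotion lies to each of them in Whissell's activation-evaluation space. The coordinates, FAP names, and values below are illustrative assumptions, not taken from GretaEngine or the MPEG-4 specification.

```python
import math

# Assumed Whissell-style (activation, evaluation) coordinates (illustrative values)
WHISSELL = {
    "joy": (0.76, 0.48),
    "sadness": (-0.64, -0.27),
}

# Assumed FAP profiles for two archetypal expressions (illustrative subset)
FAPS = {
    "joy": {"raise_l_cornerlip": 120, "raise_r_cornerlip": 120, "open_jaw": 40},
    "sadness": {"raise_l_cornerlip": -80, "raise_r_cornerlip": -80, "open_jaw": 0},
}

def intermediate_faps(target, a="joy", b="sadness"):
    """Blend FAP values of emotions a and b by inverse distance to the
    target point in Whissell's activation-evaluation space."""
    da = math.dist(target, WHISSELL[a])
    db = math.dist(target, WHISSELL[b])
    wa = db / (da + db)  # the closer archetype receives the larger weight
    return {k: wa * FAPS[a][k] + (1 - wa) * FAPS[b][k] for k in FAPS[a]}

# An intermediate expression for a target between the two archetypes
print(intermediate_faps((0.0, 0.0)))
```

When the target coincides with an archetype's coordinates, its weight becomes 1 and the archetypal FAP profile is reproduced exactly; points in between yield smoothly interpolated intermediate expressions.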