This paper describes an operational model for the automatic generation of lifelike gestures by an anthropomorphic virtual agent. The biologically motivated approach to controlling the movements of a highly articulated figure transforms spatiotemporal gesture specifications into an analog representation of the movement, from which the animations are rendered directly. To this end, knowledge-based computer animation techniques are combined with appropriate methods for trajectory formation and articulated figure animation.
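To make the pipeline concrete, the following sketch illustrates one plausible reading of "trajectory formation": via-points from a spatiotemporal gesture specification are densely sampled into an analog wrist trajectory. The minimum-jerk profile, function names, and the example via-points are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch (names invented): turning a spatiotemporal gesture
# specification -- via-points with target times -- into an "analog"
# (densely sampled) wrist trajectory, using a minimum-jerk profile as a
# stand-in for a biologically plausible point-to-point movement model.

def minimum_jerk(p0, p1, s):
    """Interpolate between positions p0 and p1 at normalized time s in [0, 1]."""
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5  # zero velocity/acceleration at ends
    return tuple(a + (b - a) * blend for a, b in zip(p0, p1))

def form_trajectory(via_points, fps=25):
    """Sample a dense trajectory from (time, position) via-points at fps frames/s."""
    frames = []
    for (t0, p0), (t1, p1) in zip(via_points, via_points[1:]):
        n = max(1, round((t1 - t0) * fps))
        for i in range(n):
            frames.append(minimum_jerk(p0, p1, i / n))
    frames.append(via_points[-1][1])  # final rest position
    return frames

# Example: a preparation-stroke-retraction sequence of wrist positions (x, y, z)
spec = [(0.0, (0.0, 0.0, 0.0)),
        (0.4, (0.2, 0.5, 0.1)),   # stroke apex
        (1.0, (0.0, 0.1, 0.0))]   # rest position
traj = form_trajectory(spec, fps=25)
print(len(traj))  # one sample per frame over 1 s, plus the endpoint
```

The resulting frame-rate samples could then drive the joints of an articulated figure via inverse kinematics, which is the kind of coupling between trajectory formation and figure animation the abstract alludes to.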