Abstract. We present a roadmap towards the creation and specification of a virtual humanoid capable of performing expressive gestures in real time. We describe a gesture motion data acquisition protocol that handles the main articulators involved in human expressive gesture (whole body, fingers and face). We then present the postprocessing of the captured data, leading to a motion database that complies with our motion specification language and can feed data-driven animation techniques.

Issues. Embodying a virtual humanoid with expressive gestures raises many problems, such as computational efficiency, realism, level of expressiveness, and high-level specification of expressive gesture [1]. In this abstract, we focus on the acquisition of motion capture data from the main articulators involved in communicative gesture (whole body, facial expressions and finger motion). We then present how the acquired data are postprocessed in order to build a database ...
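To make the idea of a motion database organized by articulator concrete, the following is a minimal sketch of what one database record might look like. The record name, channel layout, and dimensions here are illustrative assumptions, not the format defined by the motion specification language in the paper.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of one motion-database record, with separate
# per-frame channels for the three articulator groups named above.
# Channel counts below are placeholder assumptions.

@dataclass
class MotionClip:
    name: str                    # gesture label, e.g. "wave"
    fps: float                   # capture frame rate
    body: List[List[float]]      # per-frame whole-body joint values
    fingers: List[List[float]]   # per-frame finger joint values
    face: List[List[float]]      # per-frame facial control values

    def duration(self) -> float:
        """Clip length in seconds, derived from the body channel."""
        return len(self.body) / self.fps

# Minimal usage: 240 frames captured at 120 fps.
clip = MotionClip(
    name="wave",
    fps=120.0,
    body=[[0.0] * 54] * 240,
    fingers=[[0.0] * 40] * 240,
    face=[[0.0] * 30] * 240,
)
print(clip.duration())  # -> 2.0
```

Keeping the three articulator streams in one record, aligned frame by frame, is one way a data-driven animation system could retrieve a complete expressive gesture in a single lookup.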