The study of sign languages seeks a coherent model that binds the expressive nature of signs, conveyed through gesture, to a linguistic framework. Gesture modelling offers an alternative that provides device independence, scalability, and flexibility for annotating and modelling linguistic phenomena. This paper presents the requirements for, and initial experiments towards, an input method editor for sign languages. The objective is to design interfaces, backed by computational methods, that can infer and apply linguistic guidance to model sign language gestures. This in turn yields a linguistically annotated corpus of gesture animations.
Guillaume J.-L. Olivrin