— Gesture interfaces have long been pursued in the context of portable computing and immersive environments. However, such interfaces have been difficult to build, in part due to a lack of frameworks for their design and implementation. This paper presents a framework that automatically produces a gesture interface from a simple interface description. Rather than defining hand poses in a low-level, high-dimensional joint-angle space, we describe and recognize gestures in a “lexical” space, in which each hand pose is decomposed into elements of a finger-pose alphabet. The alphabet and its underlying rules are defined as a gesture notation system called GeLex. By implementing a generic hand-pose recognition algorithm, together with a mechanism that adapts it to a specific application based on the interface description, the framework makes developing a gesture interface straightforward.
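To illustrate the lexical-space idea, the following is a minimal sketch: each finger's continuous flexion is discretized into a symbol from a small alphabet, so a whole hand pose becomes a short "word" rather than a point in a high-dimensional joint-angle space. The symbol names and thresholds here are hypothetical and are not GeLex's actual notation.

```python
def finger_symbol(flexion: float) -> str:
    """Map a normalized flexion value (0 = straight, 1 = fully bent)
    to a coarse finger-pose symbol. Alphabet and thresholds are
    illustrative assumptions, not the GeLex alphabet."""
    if flexion < 0.33:
        return "E"  # extended
    elif flexion < 0.66:
        return "H"  # half-bent
    else:
        return "B"  # bent

def hand_word(flexions: list[float]) -> str:
    """Encode a hand pose (thumb..pinky flexions) as a lexical word."""
    return "".join(finger_symbol(f) for f in flexions)

# A pointing pose: index finger extended, the others bent.
print(hand_word([0.8, 0.1, 0.9, 0.9, 0.9]))  # -> "BEBBB"
```

Recognition in such a space reduces to matching short symbol strings against the poses declared in the interface description, rather than classifying raw joint angles.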