Abstract—We discuss issues in the design of a multitouch gesture sensing environment that allows the user to execute both independent and coordinated gestures. We examine different approaches, comparing front vs. back projection devices and gesture tracking vs. shape recognition. To compare the approaches, we introduce a simple gesture language for drawing diagrams. A test multitouch device built around FTIR technology is presented; a vision system, driven by a visual dataflow programming environment, interprets the user's gestures and classifies them into a set of predefined patterns corresponding to language commands.