Thanks to recent scientific advances, it is now possible to design multimodal interfaces that allow the use of speech and pointing gestures on a touchscreen. However, current speech recognizers and natural language interpreters cannot yet process spontaneous speech accurately. These limitations make it necessary to impose constraints on users’ speech inputs. Ergonomic studies are therefore needed to provide user interface designers with effective guidelines for defining usable speech constraints. We have developed a method for designing expression constraints that define tractable and usable multimodal command languages; that is, languages that can both be interpreted reliably by current systems and be learned easily, mostly through human-computer interaction. We present an empirical study that attempts to assess the usability of such a command language in a realistic multimodal software environment. A comparison between the initial behaviors of the subjects involved in this s...