This paper describes how Berkeley motes are used to continuously sense gesture for expressive control of real-time audio/visual media. We have contributed a relatively stable, inexpensive, extensible, and replicable wireless sensing platform for continuous motion tracking, and we have placed the sensors in clothing to provide unobtrusive, natural affordances to the gesturing user. The paper includes an analysis of system requirements and a discussion of the software/hardware architecture.