Computer vision systems for human-computer interaction have tended toward more precise forms of interface that require complex vision tasks such as segmentation, tracking, object recognition, pose estimation, and gesture recognition. We present an alternative approach that extends a method for en masse audience interaction through video. The en masse interaction simulates a particle moving in the motion field created by the audience, and the audience interacts by manipulating the particle's position. In this paper, we show that by adding sets of constraints to the particle's motion, one can build GUI-style widgets. We describe several such widgets and the results of a small-sample pilot study to test them. The results are not conclusive, but they are encouraging, suggesting possibilities for video games and interactive theatre.
Jeffrey E. Boyd
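The core idea of the abstract — a particle advected by the audience's motion field, with simple constraints bounding its movement — can be illustrated with a minimal sketch. All names here (`flow`, `step_particle`, the array layout) are illustrative assumptions, not the paper's actual implementation; the motion field would in practice come from frame-to-frame optical flow of the audience video.

```python
import numpy as np

def step_particle(pos, flow, dt=1.0):
    """Advance a particle one step through a dense motion field.

    pos:  (x, y) particle position in pixels.
    flow: (H, W, 2) array giving per-pixel (dx, dy) motion
          (assumed to be estimated from the audience video).
    """
    h, w, _ = flow.shape
    # Sample the flow at the particle's current (clamped) pixel position.
    x = min(max(int(round(pos[0])), 0), w - 1)
    y = min(max(int(round(pos[1])), 0), h - 1)
    dx, dy = flow[y, x]
    new = np.array([pos[0] + dt * dx, pos[1] + dt * dy])
    # Constrain the particle to the image bounds -- the kind of
    # simple positional constraint a widget could impose.
    new[0] = min(max(new[0], 0.0), w - 1.0)
    new[1] = min(max(new[1], 0.0), h - 1.0)
    return new

# Example: a uniform rightward motion field pushes the particle right.
flow = np.zeros((120, 160, 2))
flow[..., 0] = 2.0  # dx = 2 pixels per frame everywhere
pos = np.array([10.0, 60.0])
pos = step_particle(pos, flow)  # particle moves 2 pixels right
```

Restricting the particle to a line segment, a circular track, or a set of discrete bins would yield slider-, dial-, and button-like widgets in the same spirit as those described in the paper.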