In this paper we argue that gestures based on non-accidental motion features can be reliably detected amid unconstrained background motion. Specifically, we show that humans can perform non-accidental motions with high accuracy, and that these trajectories can be extracted from video with sufficient fidelity to distinguish them from background motion. To demonstrate this, we learn Gaussian mixture models of the features associated with gesture; non-accidental features yield compact, heavily weighted mixture components. Reliable detection then follows from using the mixture models to discriminate non-accidental features from the background.
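To make the detection scheme concrete, the following is a minimal illustrative sketch, not the paper's implementation: a one-dimensional stand-in motion feature is modelled with a two-component Gaussian mixture fit by expectation-maximisation. The synthetic data, component count, and thresholds are all assumptions for illustration; the idea carried over from the text is that a repeatable non-accidental feature concentrates into a compact mixture component, and the component's posterior responsibility discriminates gesture from background.

```python
import math
import random

def gauss_pdf(x, mean, var):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def fit_gmm_1d(data, n_components=2, n_iter=200):
    """Fit a 1-D Gaussian mixture by expectation-maximisation.

    Returns (weights, means, variances), one entry per component.
    """
    data = sorted(data)
    n = len(data)
    # Crude initialisation: split the sorted data into equal chunks.
    chunk = n // n_components
    means = [sum(data[k * chunk:(k + 1) * chunk]) / chunk
             for k in range(n_components)]
    variances = [1.0] * n_components
    weights = [1.0 / n_components] * n_components
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w * gauss_pdf(x, m, v)
                 for w, m, v in zip(weights, means, variances)]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: re-estimate weights, means, and variances.
        for k in range(n_components):
            rk = [r[k] for r in resp]
            nk = sum(rk)
            weights[k] = nk / n
            means[k] = sum(r * x for r, x in zip(rk, data)) / nk
            variances[k] = max(
                sum(r * (x - means[k]) ** 2 for r, x in zip(rk, data)) / nk,
                1e-6)
    return weights, means, variances

def responsibility(x, weights, means, variances, k):
    """Posterior probability that component k generated feature value x."""
    p = [w * gauss_pdf(x, m, v) for w, m, v in zip(weights, means, variances)]
    return p[k] / sum(p)

# Hypothetical 1-D feature data: broad background clutter plus a tight
# cluster standing in for a repeatable non-accidental gesture feature.
random.seed(0)
background = [random.gauss(0.0, 3.0) for _ in range(150)]
gesture = [random.gauss(8.0, 0.3) for _ in range(50)]
weights, means, variances = fit_gmm_1d(background + gesture)

# The gesture shows up as the compact (low-variance) component, and its
# responsibility at a new feature value acts as the detector.
compact = min(range(2), key=lambda k: variances[k])
```

In this toy setting, thresholding `responsibility(x, ...)` for the compact component plays the role of the detector: values drawn from the tight gesture cluster score near one, while diffuse background values score near zero.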