The computational understanding of continuous human movement plays a significant role in diverse emerging applications, in areas ranging from human-computer interaction to physical and neurorehabilitation. Non-visual feedback can aid the continuous motion-control tasks that such applications frequently entail. An architecture is introduced for enabling interaction with a system that furnishes a set of gestural affordances with assistive feedback. The approach combines machine learning techniques for understanding a user's gestures with a method for displaying salient features of the underlying inference process in real time. The methods used include a particle filter that tracks multiple hypotheses about a user's input as it unfolds, together with models of the nonlinear dynamics intrinsic to the movements of interest. Non-visual feedback in this system is based on a presentation of error features derived from an estimate of the sampled time-varying probability ...
Yon Visell, Jeremy R. Cooperstock
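To make the inference step concrete, the following Python sketch implements a generic bootstrap particle filter over a toy one-dimensional movement model. It is a minimal illustration only: the dynamics in step_dynamics (a damped oscillator), the noise parameters, and the use of posterior spread as an "error feature" are assumptions standing in for the learned gesture models and feedback mapping of the system described, which are not specified in this abstract.

    import numpy as np

    def step_dynamics(x, dt=0.01):
        # Hypothetical nonlinear movement model (a damped oscillator),
        # standing in for a learned model of gesture dynamics.
        pos, vel = x[..., 0], x[..., 1]
        acc = -4.0 * pos - 0.5 * vel
        return np.stack([pos + dt * vel, vel + dt * acc], axis=-1)

    def particle_filter_step(particles, weights, observation,
                             process_noise=0.05, obs_noise=0.1, rng=None):
        # One update cycle: propagate each hypothesis through the
        # dynamics, reweight by the new observation, and resample
        # when the weights degenerate.
        if rng is None:
            rng = np.random.default_rng()
        particles = step_dynamics(particles)
        particles = particles + rng.normal(0.0, process_noise, particles.shape)

        # Gaussian likelihood of the observed position under each particle.
        residual = observation - particles[:, 0]
        weights = weights * np.exp(-0.5 * (residual / obs_noise) ** 2)
        weights = weights / weights.sum()

        # Resample if the effective sample size collapses below n/2.
        n = len(particles)
        if 1.0 / np.sum(weights ** 2) < n / 2:
            idx = rng.choice(n, size=n, p=weights)
            particles, weights = particles[idx], np.full(n, 1.0 / n)
        return particles, weights

    # A candidate "error feature" for the feedback display: the spread
    # of the posterior around its mean, i.e. an uncertainty measure.
    rng = np.random.default_rng(0)
    n = 500
    particles = rng.normal(0.0, 0.2, (n, 2))
    weights = np.full(n, 1.0 / n)
    for t in range(100):
        observation = np.cos(2.0 * t * 0.01) + rng.normal(0.0, 0.1)
        particles, weights = particle_filter_step(particles, weights,
                                                  observation, rng=rng)
    mean = np.average(particles[:, 0], weights=weights)
    spread = np.sqrt(np.average((particles[:, 0] - mean) ** 2, weights=weights))
    print(f"posterior mean {mean:.3f}, spread (error feature) {spread:.3f}")

Here the posterior spread is simply printed; in the architecture described above, such error features would presumably be mapped to an auditory or haptic display in real time, and each gestural affordance would carry its own dynamics model.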