In this paper, we propose an automatic learning method for gesture recognition. We combine two different pattern recognition techniques: the Self-Organizing Map (SOM) and Support Ve...
Vision-based user interfaces enable natural interaction modalities such as gestures. Such interfaces require computationally intensive video processing at low latency. We demonstr...
Ming-yu Chen, Lily B. Mummert, Padmanabhan Pillai,...
In this paper a gesture recognition system using 3D data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of ...
Sotiris Malassiotis, Niki Aifanti, Michael G. Stri...
This paper presents an approach for view-based recognition of gestures. The approach is based on representing each gesture as a sequence of learned body poses. The gestures are re...
Ahmed M. Elgammal, Vinay Shet, Yaser Yacoob, Larry ...
Untethered multimodal interfaces are more attractive than tethered ones because they are more natural and expressive for interaction. Such interfaces usually require robust vision...