IROS 2008, IEEE

Visual recognition of grasps for human-to-robot mapping

— This paper presents a vision-based method for grasp classification. It is developed as part of a Programming by Demonstration (PbD) system in which recognition of objects and pick-and-place actions represents a basic building block for task learning. In contrast to earlier approaches, no articulated 3D reconstruction of the hand over time takes place. The input consists of a single image of the human hand. A 2D representation of the hand shape, based on gradient orientation histograms, is extracted from the image. The hand shape is then classified as one of six grasps by finding similar hand shapes in a large database of grasp images. The database search is performed using Locality Sensitive Hashing (LSH), an approximate k-nearest neighbor approach. The nearest neighbors also give an estimated hand orientation with respect to the camera. The six human grasps are mapped to three Barrett hand grasps. Depending on the type of robot grasp, a precomputed grasp strategy is selected...
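The pipeline described in the abstract (gradient orientation histogram as a 2D hand-shape descriptor, then LSH-based approximate k-NN lookup in a labeled grasp database with a vote over neighbor labels) can be sketched roughly as below. This is only an illustrative sketch, not the paper's implementation: the bin count, random-hyperplane hashing scheme, number of hash bits, and the majority-vote decision are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def orientation_histogram(image, n_bins=8):
    """Gradient orientation histogram of a grayscale image, weighted by
    gradient magnitude -- a rough stand-in for the paper's 2D hand-shape
    descriptor (bin count and normalization are assumptions)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def lsh_codes(vectors, planes):
    """Random-hyperplane LSH: the sign of each projection gives one hash bit."""
    return (vectors @ planes.T > 0).astype(np.uint8)

def build_table(db, planes):
    """Bucket database indices by their LSH code."""
    table = {}
    for i, code in enumerate(lsh_codes(db, planes)):
        table.setdefault(code.tobytes(), []).append(i)
    return table

def classify(query, db, labels, planes, table, k=5):
    """Approximate k-NN: rank only the query's bucket (falling back to a
    full scan if the bucket is empty), then majority-vote the grasp labels."""
    key = lsh_codes(query[None, :], planes)[0].tobytes()
    cand = table.get(key, list(range(len(db))))
    dists = np.linalg.norm(db[cand] - query, axis=1)
    nearest = [cand[i] for i in np.argsort(dists)[:k]]
    votes, counts = np.unique([labels[i] for i in nearest], return_counts=True)
    return votes[np.argmax(counts)]

# Toy check: the descriptor of a vertical-edge image puts all its mass in
# the horizontal-gradient bin, and two well-separated synthetic "grasp"
# clusters are classified correctly.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = orientation_histogram(img)

dim, n = 16, 40
db = np.vstack([rng.normal(+1.0, 0.1, (n, dim)),
                rng.normal(-1.0, 0.1, (n, dim))])
labels = [0] * n + [1] * n
planes = rng.normal(size=(10, dim))
table = build_table(db, planes)
pred = classify(rng.normal(+1.0, 0.1, dim), db, labels, planes, table)
```

In a real system the database entries would be descriptors of labeled grasp images, and the retrieved neighbors would additionally provide the estimated hand orientation mentioned in the abstract.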
Hedvig Kjellström, Javier Romero, Danica Kragic
Added 31 May 2010
Updated 31 May 2010
Type Conference
Year 2008
Where IROS
Authors Hedvig Kjellström, Javier Romero, Danica Kragic