
ICVS 2009, Springer

Integration of Visual Cues for Robotic Grasping

In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods, one advantageous in predicting how to grasp an object and the other in predicting where to apply a grasp. The first reconstructs a wire-frame object model through curve matching; elementary grasping actions can be associated with parts of this model. The second predicts grasping points in a 2D contour image of the object. By integrating information from the two approaches, we generate a sparse set of high-quality full grasp configurations. We demonstrate our approach integrated in a vision system, on objects with complex shapes as well as in cluttered scenes.
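The abstract describes filtering model-derived grasp candidates by a 2D grasping-point predictor to obtain a sparse, high-quality set. A minimal sketch of that integration step is given below; the class names, the toy scoring heuristic, and the threshold are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Grasp:
    point_2d: tuple        # projected grasping point in the contour image
    approach: str          # elementary grasping action from the model part
    score: float = 0.0

def predict_point_score(point_2d):
    # Stand-in for the 2D contour-based grasping-point predictor;
    # here a toy heuristic favouring points near the image centre.
    x, y = point_2d
    return 1.0 / (1.0 + abs(x - 0.5) + abs(y - 0.5))

def integrate_cues(model_grasps, threshold=0.7):
    """Keep only model-derived grasps whose projected point is also
    rated highly by the 2D predictor, yielding a sparse set of full
    grasp configurations ranked by score."""
    selected = []
    for g in model_grasps:
        g.score = predict_point_score(g.point_2d)
        if g.score >= threshold:
            selected.append(g)
    return sorted(selected, key=lambda g: g.score, reverse=True)

# Two hypothetical candidates from the wire-frame model:
candidates = [Grasp((0.5, 0.5), "pinch"), Grasp((0.9, 0.1), "wrap")]
best = integrate_cues(candidates)
```

In this toy run only the centred "pinch" candidate survives the threshold; the real system would replace the heuristic with the learned contour-based predictor and attach full 3D configurations from the wire-frame model.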
Niklas Bergström, Jeannette Bohg, Danica Kragic
Added 26 May 2010
Updated 26 May 2010
Type Conference
Year 2009
Where ICVS
Authors Niklas Bergström, Jeannette Bohg, Danica Kragic