
IROS 2007 (IEEE)

Learning full-body motions from monocular vision: dynamic imitation in a humanoid robot

In an effort to ease the burden of programming motor commands for humanoid robots, a computer vision technique is developed for converting a monocular video sequence of human poses into stabilized motor commands for a humanoid robot. The human teacher wears a multi-colored body suit while performing a desired set of actions. Leveraging the colors of the body suit, the system detects the most probable locations of the different body parts and joints in the image. Then, by exploiting the known dimensions of the body suit, a user-specified number of candidate 3D poses is generated for each frame. Using human-to-robot joint correspondences, the estimated 3D poses for each frame are then mapped to corresponding robot motor commands. An initial set of kinematically valid motor commands is generated using an approximate best-path search through the pose candidates for each frame. Finally, a learning-based probabilistic dynamic balance model obtains a dynamically stable imitative se...
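The "approximate best path search through the pose candidates for each frame" can be sketched as a Viterbi-style dynamic program: each frame contributes a set of candidate poses, each path accrues a per-candidate cost plus a transition cost between consecutive frames, and the cheapest path is recovered by backtracking. The cost functions and pose representation below are illustrative assumptions, not the authors' actual implementation.

```python
def best_pose_path(candidates, unary_cost, transition_cost):
    """Viterbi-style search over per-frame pose candidates.

    candidates[t] is a list of candidate poses for frame t (any type);
    unary_cost(pose) scores a single candidate, transition_cost(a, b)
    scores moving from pose a to pose b between consecutive frames.
    Returns one minimal-cost list of candidate indices, one per frame.
    NOTE: cost definitions here are hypothetical stand-ins.
    """
    # cost[t][i]: minimal total cost of any path ending at candidate i of frame t
    cost = [[unary_cost(p) for p in candidates[0]]]
    back = []  # back[t-1][j]: best predecessor index for candidate j of frame t
    for t in range(1, len(candidates)):
        row, ptr = [], []
        for pose in candidates[t]:
            best_i = min(
                range(len(candidates[t - 1])),
                key=lambda i: cost[-1][i]
                + transition_cost(candidates[t - 1][i], pose),
            )
            row.append(
                cost[-1][best_i]
                + transition_cost(candidates[t - 1][best_i], pose)
                + unary_cost(pose)
            )
            ptr.append(best_i)
        cost.append(row)
        back.append(ptr)
    # Backtrack from the cheapest final candidate to recover the path
    path = [min(range(len(cost[-1])), key=cost[-1].__getitem__)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With scalar "poses", zero unary cost, and absolute difference as the transition cost, the search favors the temporally smoothest candidate sequence, which is the role such a step plays before the dynamic balance model is applied.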
Jeffrey B. Cole, David B. Grimes, Rajesh P. N. Rao
Added 03 Jun 2010
Updated 03 Jun 2010
Type Conference
Year 2007
Where IROS
Authors Jeffrey B. Cole, David B. Grimes, Rajesh P. N. Rao