Jeremiah J. Neubert, Nicola J. Ferrier

Most methods for visual control of robots formulate robot commands in joint or Cartesian space. To move the robot, these commands must be remapped to motor torques, which usually requires a dynamic model of the robot. In this paper we present a method for parameterizing joint torques and learning to map visual input directly to them. The system is implemented and used to control a CRS 465 robot. The results demonstrate that the torque parameterization allows both the motion and the position of the robot's end effector to be controlled, and that it is possible to map visual input directly to joint torques.
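To make the general idea concrete, the following is a minimal sketch, not the paper's actual formulation: it assumes each joint's torque profile over a fixed-duration move is described by a few coefficients (here a quadratic in normalized time, an illustrative choice of basis) and that a learned linear map W takes a visual feature vector directly to those coefficients. All names, dimensions, and the basis choice (torque_profile, visual_to_torques, N_JOINTS, N_PARAMS, N_FEATURES) are hypothetical.

import numpy as np

N_JOINTS = 6       # assumed joint count; illustrative only
N_PARAMS = 3       # assumed coefficients per joint torque profile
N_FEATURES = 4     # assumed dimension of the visual feature vector

rng = np.random.default_rng(0)
# Stand-in for a learned map from visual features to torque parameters.
W = rng.normal(size=(N_JOINTS * N_PARAMS, N_FEATURES))

def torque_profile(params, t, duration=1.0):
    # Evaluate one joint's parameterized torque at time t; the quadratic
    # basis in normalized time is an assumption made for this sketch.
    s = t / duration
    return params[0] + params[1] * s + params[2] * s**2

def visual_to_torques(features, t):
    # Map a visual feature vector directly to per-joint torques at time t,
    # bypassing an explicit dynamic model of the robot.
    params = (W @ features).reshape(N_JOINTS, N_PARAMS)
    return np.array([torque_profile(p, t) for p in params])

# Example: torques commanded at t = 0.5 s for some observed image features.
features = np.array([0.2, -0.1, 0.05, 1.0])
print(visual_to_torques(features, t=0.5))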