This paper presents a vision-based tracking system suitable for autonomous robot vehicle guidance. The system includes a head with three on-board CCD cameras, which can be mounted anywhere on a mobile vehicle. By processing consecutive trinocular sets of precisely aligned and rectified images, the system tracks the local 3D trajectory of the vehicle in an unstructured environment. First, a 3D representation of stable features in the image scene is generated using a stereo algorithm. Second, motion is estimated by tracking matched features over time. The 6-DOF motion equation is then solved using an iterative least-squares fit algorithm. Finally, a Kalman filter implementation is used to optimize the world representation of scene features.
Parvaneh Saeedi, Peter D. Lawrence, David G. Lowe
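The abstract's final step refines each scene feature's world position with a Kalman filter. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a static 3D feature observed repeatedly through triangulation, with illustrative noise values and hypothetical names (`FeatureFilter`, `meas_var`).

```python
# Minimal sketch (assumed, not from the paper) of a per-feature Kalman filter:
# each stable scene feature keeps a 3D world-position estimate that is refined
# whenever the feature is re-observed in a new trinocular frame.
import numpy as np

class FeatureFilter:
    """Kalman filter over a static 3D feature position (constant-state model)."""

    def __init__(self, initial_xyz, initial_var=1.0):
        self.x = np.asarray(initial_xyz, dtype=float)   # 3D position estimate
        self.P = np.eye(3) * initial_var                # estimate covariance

    def update(self, measured_xyz, meas_var=0.05):
        """Fuse a new triangulated measurement of the same feature."""
        z = np.asarray(measured_xyz, dtype=float)
        R = np.eye(3) * meas_var                        # measurement noise
        # The state is the measured quantity itself, so H = I.
        S = self.P + R                                  # innovation covariance
        K = self.P @ np.linalg.inv(S)                   # Kalman gain
        self.x = self.x + K @ (z - self.x)              # corrected estimate
        self.P = (np.eye(3) - K) @ self.P               # corrected covariance
        return self.x

# Usage: repeated noisy observations of one feature converge on its position.
f = FeatureFilter([1.0, 2.0, 5.0])
for z in ([1.02, 1.98, 5.05], [0.97, 2.03, 4.96], [1.01, 2.00, 5.02]):
    f.update(z)
print(f.x)
```

In this simplified model the measurement directly observes the state, so the observation matrix reduces to the identity; a full system would also transform measurements from the camera frame into the world frame using the current vehicle pose estimate.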