In field environments it is usually not possible to provide robotic systems with valid geometric models of the task and environment. The robot, or team of robots, must build these models by performing appropriate sensing actions. Here, an algorithm based on iterative sensor planning and sensor redundancy is proposed to enable robots to efficiently build 3D models of the environment and task. The method assumes stationary robotic vehicles with cameras carried on articulated mounts. The algorithm uses the measured scene information to select new camera mount poses based on their information content. Issues addressed include model-based fusion of data from multiple sensors, and compensation for uncertainty and vehicle suspension motion. Simulations demonstrate the effectiveness of the algorithm.
Vivek A. Sujan, Steven Dubowsky
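As a rough illustration of the idea of selecting new camera poses by information content, the following is a minimal sketch, not the authors' algorithm: candidate poses are scored by the Shannon entropy of the occupancy-grid cells they would observe, and the pose with the largest expected information gain is chosen for the next measurement. The grid size, circular sensor footprint, and candidate poses are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of information-based next-best-view selection
# (illustrative assumptions throughout; not the paper's implementation).
import numpy as np


def cell_entropy(p):
    """Shannon entropy (bits) of an occupancy probability; ~0 for p near 0 or 1."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))


def visible_cells(grid_shape, pose, radius=5):
    """Hypothetical sensor footprint: grid cells within `radius` of the pose."""
    rows, cols = np.indices(grid_shape)
    r, c = pose
    return (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2


def next_best_pose(occupancy, candidate_poses, radius=5):
    """Return the candidate pose whose footprint contains the most uncertainty."""
    gains = []
    for pose in candidate_poses:
        mask = visible_cells(occupancy.shape, pose, radius)
        gains.append(cell_entropy(occupancy[mask]).sum())
    best = int(np.argmax(gains))
    return candidate_poses[best], gains[best]


if __name__ == "__main__":
    # Unknown cells start at p = 0.5 (maximum entropy); a patch already
    # observed near the origin is marked confidently empty.
    occupancy = np.full((40, 40), 0.5)
    occupancy[:10, :10] = 0.05

    candidates = [(5, 5), (5, 30), (30, 5), (30, 30)]
    pose, gain = next_best_pose(occupancy, candidates)
    print(f"Next camera pose {pose}, expected information gain {gain:.1f} bits")
```

In an iterative scheme of this kind, each new measurement updates the occupancy probabilities, and the selection step is repeated until the remaining map entropy falls below a threshold.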