The Visual Acts theory aims to provide intelligent assistance for camera viewpoint selection during teleoperation. It combines top-down partitioning of a task and bottom-up monitoring of the operator to select task-relevant camera viewpoints. Previous experimental studies have shown that Visual Acts provides camera views of sufficient quality to allow an operator to complete a task. In cases where the camera system is complex and difficult to master, it selects better viewpoints than the operator. In this paper we present an alternative architecture incorporating a viewpoint selection algorithm that places emphasis on what the operator should do next, rather than on what the operator is currently doing. Experimental results show that this simpler algorithm performs as well as the more elaborate Visual Acts algorithm, and raises the operator's awareness of 3D information. The results contribute to a better understanding of human-robot interaction in telerobotic scenarios.
Gerard T. McKee, Bernard G. Brooks, Paul S. Schenk