This paper presents the complete integrated planning, executing and learning robotic agent Rogue. We describe Rogue's task planner, which interleaves high-level task planning with real-world robot execution; it supports multiple asynchronous goals, suspends and interrupts tasks, and monitors and compensates for failures. We present a general approach for learning situation-dependent rules from execution, which correlates environmental features with learning opportunities, thereby detecting patterns and allowing planners to predict and avoid failures. We present two implementations of this general learning approach: one in the robot's path planner and one in the task planner. We present empirical data showing the effectiveness of Rogue's novel learning approach.
Karen Zita Haigh, Manuela M. Veloso