A perennial challenge in creating and using complex autonomous agents is following their choices of action as the world changes dynamically, and understanding why they act as they do. This paper reports on our work to help human developers and observers better follow and understand the actions of autonomous agents. We introduce the concept of layered disclosure, by which autonomous agents include in their architecture the foundations necessary to disclose, upon request, the specific reasons for their actions. Layered disclosure thus goes beyond standard code-level debugging tools. At its core, it gives the agent designer the ability to define an appropriate information hierarchy, which can include agent-specific constructs such as internal state that persists over time. The user may request this information at any of the specified levels of detail, either retroactively or while the agent is acting. We present layered disclosure as we created and i...
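To make the idea concrete, the following is a minimal sketch, with entirely hypothetical names and levels, of how an agent might log its decisions at designer-defined layers of detail and answer queries about them either live or retroactively; the paper's actual hierarchy and implementation are agent- and domain-specific.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DisclosureRecord:
    time: int      # agent decision cycle at which the entry was logged
    level: int     # depth in the designer-defined information hierarchy
    message: str   # human-readable explanation at this level

@dataclass
class DisclosingAgent:
    """Toy agent that records layered disclosures as it acts."""
    log: List[DisclosureRecord] = field(default_factory=list)

    def disclose(self, time: int, level: int, message: str) -> None:
        # Called from within the agent's decision loop.
        self.log.append(DisclosureRecord(time, level, message))

    def query(self, max_level: int, since: int = 0) -> List[str]:
        # A user can ask for any level of detail, while the agent
        # runs or after the fact (retroactively), by filtering the log.
        return [r.message for r in self.log
                if r.level <= max_level and r.time >= since]

# Hypothetical usage: level 1 = chosen action, 2 = reason,
# 3 = persistent internal state that influenced the choice.
agent = DisclosingAgent()
agent.disclose(10, 1, "pass ball to teammate")
agent.disclose(10, 2, "teammate is open on the left wing")
agent.disclose(10, 3, "internal state: ball-possession belief = 0.9")
print(agent.query(max_level=2))
```

Here a coarse query (`max_level=2`) returns only the action and its reason, while a deeper query would also surface the persistent internal state, mirroring the layered access the abstract describes.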
Patrick Riley, Peter Stone, Manuela M. Veloso