The options framework provides a method for reinforcement learning agents to build new high-level skills. However, because options are usually learned in the same state space as the problem the agent is currently solving, they cannot be ported to other, similar tasks that have different state spaces. We introduce the notion of learning options in agent-space, the portion of the agent's sensory input that is present and retains the same semantics across successive problem instances, rather than in problem-space. Agent-space options can be reused in later tasks that share the same agent-space but are sufficiently distinct to require different problem-spaces. We present experimental results demonstrating the use of agent-space options in building reusable skills.
George Konidaris, Andrew G. Barto
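To make the distinction concrete, the sketch below illustrates the core idea in Python: an option whose initiation set, policy, and termination condition are defined over agent-space observations, so the same option object can be executed in tasks with different problem-space state spaces. All names here (AgentSpaceOption, run_option, to_agent_space) are hypothetical illustrations under an assumed gym-style environment interface, not the paper's implementation.

```python
# A minimal sketch of an agent-space option; names are hypothetical,
# not the authors' code.
import random
from dataclasses import dataclass
from typing import Any, Callable, Tuple

# Agent-space observation: the same features, with the same semantics,
# in every problem instance.
AgentObs = Tuple[float, ...]


@dataclass
class AgentSpaceOption:
    """An option whose components are all defined over agent-space
    observations rather than over any single task's problem-space state."""
    can_initiate: Callable[[AgentObs], bool]       # initiation set I
    policy: Callable[[AgentObs], Any]              # option policy pi
    termination_prob: Callable[[AgentObs], float]  # termination condition beta


def run_option(option: AgentSpaceOption, env, to_agent_space, max_steps: int = 100):
    """Execute an option in an arbitrary task (assumed gym-style API).
    The only task-specific piece is `to_agent_space`, the mapping from that
    task's problem-space state to the shared agent-space observation; the
    option itself is unchanged when moved to a new task."""
    state = env.reset()
    obs = to_agent_space(state)
    if not option.can_initiate(obs):
        return state  # option not applicable in this state
    for _ in range(max_steps):
        state, _reward, done, _info = env.step(option.policy(obs))
        obs = to_agent_space(state)
        if done or random.random() < option.termination_prob(obs):
            break
    return state
```

Because the option never reads problem-space state directly, the same AgentSpaceOption instance can be passed to run_option for two tasks whose state spaces differ, provided each task supplies its own mapping into the shared agent-space.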