We introduce relational temporal difference learning as an effective approach to solving multi-agent Markov decision problems with large state spaces. Our algorithm uses temporal difference updates to learn a distributed value function represented over a conceptual hierarchy of relational predicates. We present experiments on two domains from the General Game Playing repository, in which our system achieves higher learning rates than nonrelational methods. We also discuss related work and directions for future research.
Nima Asgharbeygi, David J. Stracuzzi, Pat Langley
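To make the core idea concrete, the following is a minimal sketch of a TD(0) update applied to a value function expressed as a weighted combination of relational predicate features. The predicates (`own_piece_advanced`, `opponent_threatens`), the dictionary state encoding, and the simple linear form are illustrative assumptions, not the paper's actual representation or predicate hierarchy.

```python
# Hypothetical relational predicates: each maps a state (here, a dict of
# game facts) to 1.0 if the relation holds and 0.0 otherwise. These stand
# in for the relational features the paper describes; the specific
# predicates are invented for illustration.
def own_piece_advanced(state):
    return 1.0 if state.get("max_row", 0) >= 3 else 0.0

def opponent_threatens(state):
    return 1.0 if state.get("threats", 0) > 0 else 0.0

PREDICATES = [own_piece_advanced, opponent_threatens]

def value(weights, state):
    """Value estimate distributed over predicates: a weighted sum of
    predicate activations in the given state."""
    return sum(w * p(state) for w, p in zip(weights, PREDICATES))

def td_update(weights, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) step: compute the temporal difference error and move
    each weight along its predicate's activation in the current state."""
    delta = reward + gamma * value(weights, next_state) - value(weights, state)
    return [w + alpha * delta * p(state) for w, p in zip(weights, PREDICATES)]

# Example usage: one update after observing a transition with no reward.
weights = [0.0] * len(PREDICATES)
s = {"max_row": 4, "threats": 0}
s_next = {"max_row": 4, "threats": 1}
weights = td_update(weights, s, reward=0.0, next_state=s_next)
```

Because the value function is a sum over relational predicates rather than a table over ground states, a single update generalizes across every state in which the same relations hold, which is one plausible reading of why such a representation can learn faster than nonrelational methods in large state spaces.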