We describe an effort to train a RoboCup Simulation League soccer-playing agent using case-based reasoning. The agent learns (builds a case base) by observing the behaviour of existing players and recording the spatial configuration of the objects those players pay attention to. The agent can then use the case base to decide what actions to perform when it encounters similar spatial configurations. When observing a simple goal-driven, rule-based, stateless agent, the trained player appears to imitate the behaviour of the original, and experimental results confirm this observation. The process requires little human intervention and can be used to train agents exhibiting diverse behaviour in an automated manner.
Michael W. Floyd, Babak Esfandiari, Kevin Lam
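To make the case-based selection step concrete, the following is a minimal sketch of how an agent might reuse observed cases: each case pairs a spatial configuration (positions of attended objects) with the action the observed player took, and the agent reuses the action of the most similar stored case. The class and function names, the nearest-neighbour retrieval, and the distance-based similarity are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of case-based action selection (assumed representation,
# not the paper's actual code): a case stores the spatial configuration of
# attended objects and the action observed in that configuration.
from dataclasses import dataclass
from math import dist

@dataclass
class Case:
    configuration: list[tuple[float, float]]  # positions of attended objects
    action: str                               # e.g. "kick", "dash", "turn"

def similarity(a: list[tuple[float, float]], b: list[tuple[float, float]]) -> float:
    # Assumed similarity: negative summed distance between corresponding objects.
    return -sum(dist(p, q) for p, q in zip(a, b))

def select_action(case_base: list[Case], observed: list[tuple[float, float]]) -> str:
    # Reuse the action of the most similar stored case (1-nearest-neighbour).
    best = max(case_base, key=lambda c: similarity(c.configuration, observed))
    return best.action

# Example: a new observation close to the first stored case reuses its action.
case_base = [
    Case([(1.0, 0.0), (5.0, 2.0)], "kick"),
    Case([(8.0, 3.0), (0.0, 9.0)], "dash"),
]
print(select_action(case_base, [(1.2, 0.1), (5.1, 2.2)]))  # -> "kick"
```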