The Significance of Temporal-Difference Learning in Self-Play Training: TD-Rummy versus EVO-rummy

Reinforcement learning has been used to train game-playing agents. The value function for a complex game must be approximated with a continuous function, because the number of states is too large to enumerate. Temporal-difference learning with self-play is one method that has been used successfully to derive the value approximation function. Coevolution of the value function is also claimed to yield good results. This paper reports a direct comparison between an agent trained to play gin rummy with temporal-difference learning and the same agent trained with coevolution. Coevolution produced superior results.
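For readers unfamiliar with the technique the abstract compares against coevolution, the following is a minimal illustrative sketch of a TD(0) update for a linear value function, the general idea behind self-play value training. The feature encoding, step size, and function names here are hypothetical and are not taken from the paper's TD-Rummy implementation.

import numpy as np

def td0_update(w, features, reward, next_features, alpha=0.01, gamma=1.0):
    """One TD(0) update for a linear value function V(s) = w . phi(s)."""
    v = w @ features            # value estimate of the current state
    v_next = w @ next_features  # value estimate of the successor state
    td_error = reward + gamma * v_next - v
    # Semi-gradient step: move V(s) toward the bootstrapped TD target.
    return w + alpha * td_error * features

# Toy usage: random feature vectors stand in for encoded game states
# generated by self-play; real training would encode hands and discards.
rng = np.random.default_rng(0)
w = np.zeros(8)
for _ in range(100):
    phi, phi_next = rng.random(8), rng.random(8)
    w = td0_update(w, phi, reward=0.0, next_features=phi_next)

In self-play, both sides of the game are played by the same value function, so each completed game supplies a stream of (state, reward, next state) transitions for updates like the one above.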
Type: Conference
Year: 2003
Where: ICML
Authors: Clifford Kotnik, Jugal K. Kalita