
ICCBR 2010, Springer

Reducing the Memory Footprint of Temporal Difference Learning over Finitely Many States by Using Case-Based Generalization

In this paper we present an approach for reducing the memory footprint of temporal difference methods over a finite set of states. We use case-based generalization to group the states visited during the reinforcement learning process. We follow a lazy learning approach: cases are grouped in the order in which they are visited. Any new state visited is assigned to an existing entry in the Q-table provided that a similar state has been visited before; otherwise, a new entry is added to the Q-table. We performed experiments on a turn-based game in which actions have non-deterministic effects and may have long-term repercussions on the outcome of the game. The main conclusion from our experiments is that case-based generalization substantially reduces the size of the Q-table while maintaining the quality of the RL estimates.
Matt Dilts, Héctor Muñoz-Avila
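
As a rough illustration of the idea (not the authors' implementation), the following Python sketch pairs a one-step Q-learning update, one common temporal difference method, with the lazy, visit-order case grouping the abstract describes. The class name, the distance function, the similarity threshold, the alpha/gamma values, and the epsilon-greedy policy are all assumptions made for the sake of a self-contained, runnable example; the paper itself only specifies TD learning over a Q-table with case-based generalization.

import random


class CaseBasedQTable:
    """Q-table whose rows are representative states ("cases") rather than
    raw states, following the case-based generalization described above."""

    def __init__(self, actions, distance, threshold, alpha=0.1, gamma=0.9):
        self.actions = list(actions)   # finite action set
        self.distance = distance       # state similarity metric (assumed)
        self.threshold = threshold     # max distance to reuse a row (assumed)
        self.alpha = alpha             # learning rate (assumed value)
        self.gamma = gamma             # discount factor (assumed value)
        self.cases = []                # representative states, in visit order
        self.qvalues = []              # one dict of action-values per case

    def entry(self, state):
        # Lazy grouping: reuse the first sufficiently similar case seen so
        # far; add a new row only when no similar state was visited before.
        for i, case in enumerate(self.cases):
            if self.distance(state, case) <= self.threshold:
                return i
        self.cases.append(state)
        self.qvalues.append({a: 0.0 for a in self.actions})
        return len(self.cases) - 1

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update applied to the generalized entries.
        i = self.entry(state)
        j = self.entry(next_state)
        target = reward + self.gamma * max(self.qvalues[j].values())
        self.qvalues[i][action] += self.alpha * (target - self.qvalues[i][action])

    def choose(self, state, epsilon=0.1):
        # Epsilon-greedy action selection (an assumed exploration policy).
        i = self.entry(state)
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.qvalues[i], key=self.qvalues[i].get)

Note that the linear scan in entry trades lookup time for the reduced table size the paper targets; a practical implementation would likely index the case base for faster retrieval.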