To avoid the curse of dimensionality, function approximators are used in reinforcement learning rather than learning value functions for individual states. To make better use of computational resources (basis functions), many researchers are investigating ways to adapt the basis functions during the learning process so that they better fit the value-function landscape. Here we introduce temporal neighborhoods as small groups of states that experience frequent intragroup transitions during on-line sampling. We then form basis functions along these temporal neighborhoods. Empirical evidence is provided which demonstrates the effectiveness of this scheme. We discuss a class of RL problems for which this method is plausible.

1 Overview

In reinforcement learning, an agent navigates an environment (a state space) by selecting various actions in each state. As the agent takes actions, it receives rewards indicating the "goodness" of the action. Reinforcement learning is a methodology which...
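
No implementation is given in this excerpt, but a minimal sketch suggests how temporal neighborhoods could be extracted from on-line samples and turned into basis functions. The transition-count threshold min_count, the union-find grouping, and the Gaussian basis functions of fixed width are illustrative assumptions, not the authors' construction, and numeric state representations are assumed.

    # Sketch only: group states linked by frequent transitions, then
    # center one basis function on each resulting temporal neighborhood.
    from collections import defaultdict
    import numpy as np

    def find_temporal_neighborhoods(transitions, min_count=5):
        """transitions: iterable of (state, next_state) pairs gathered on-line.
        Returns a list of sets of states (the temporal neighborhoods)."""
        counts = defaultdict(int)
        for s, s_next in transitions:
            counts[frozenset((s, s_next))] += 1

        # Union-find: merge states connected by frequent transitions.
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        def union(a, b):
            parent[find(a)] = find(b)

        for pair, c in counts.items():
            if c >= min_count and len(pair) == 2:
                a, b = tuple(pair)
                union(a, b)

        groups = defaultdict(set)
        for s in parent:
            groups[find(s)].add(s)
        return list(groups.values())

    def make_basis_functions(neighborhoods, width=1.0):
        """One radial basis function centered on each neighborhood's mean state."""
        centers = [np.mean(np.array(list(group), dtype=float), axis=0)
                   for group in neighborhoods]
        def phi(state):
            state = np.asarray(state, dtype=float)
            return np.array([np.exp(-np.sum((state - c) ** 2) / (2 * width ** 2))
                             for c in centers])
        return phi

Under this sketch, a linear approximation of the value function would be V(s) = w . phi(s), with the weight vector w learned by a standard temporal-difference update over the adapted basis.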