In this paper, we describe methods for efficiently computing better solutions to control problems in continuous state spaces. We provide algorithms that exploit online search to boost the power of very approximate value functions discovered by traditional reinforcement learning techniques. We examine local searches, where the agent performs a finite-depth lookahead search, and global searches, where the agent performs a search for a trajectory all the way from the current state to a goal state. The key to the success of the local methods lies in taking a value function, which gives a rough solution to the hard problem of finding good trajectories from every single state, and combining that with online search, which then gives an accurate solution to the easier problem of finding a good trajectory specifically from the current state. The key to the success of the global methods lies in using aggressive state-space search techniques such as uniform-cost search and A*, tamed into a tractable form...
Scott Davies, Andrew Y. Ng, Andrew W. Moore
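To make the local-search idea above concrete, the following is a minimal sketch (not code from the paper) of a depth-limited lookahead that backs up one-step rewards from a simulated model and evaluates frontier states with the learned approximate value function. The names `model`, `value_fn`, and the discretized `actions` set are illustrative assumptions, not the authors' interfaces.

```python
import math

def lookahead_value(state, depth, model, actions, value_fn, gamma=0.99):
    """Depth-limited lookahead: expand the (assumed deterministic) model
    `depth` steps, then evaluate frontier states with the approximate
    value function learned offline."""
    if depth == 0:
        return value_fn(state)
    best = -math.inf
    for a in actions:
        next_state, reward = model(state, a)  # one-step simulation
        best = max(best, reward + gamma * lookahead_value(
            next_state, depth - 1, model, actions, value_fn, gamma))
    return best

def choose_action(state, depth, model, actions, value_fn, gamma=0.99):
    """Act online: pick the action whose depth-limited backup is highest."""
    best_a, best_q = None, -math.inf
    for a in actions:
        next_state, reward = model(state, a)
        q = reward + gamma * lookahead_value(
            next_state, depth - 1, model, actions, value_fn, gamma)
        if q > best_q:
            best_a, best_q = a, q
    return best_a
```

With depth 1 this reduces to acting greedily on the value function; deeper searches correct more of the value function's local errors at the current state, which is the leverage the local methods exploit.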