This paper discusses theoretical and experimental aspects of gradient-based approaches to the direct optimization of policy performance in controlled POMDPs. We introduce GPOMDP, a REINFORCE-like algorithm for estimating an approximation to the gradient of the average reward as a function of the parameters of a stochastic policy. The algorithm's chief advantages are that it requires only a single sample path of the underlying Markov chain, it uses only one free parameter β ∈ [0, 1), which has a natural interpretation in terms of a bias-variance trade-off, and it requires no knowledge of the underlying state. We prove convergence of GPOMDP and show how the gradient estimates produced by GPOMDP can be used in a conjugate-gradient procedure to find local optima of the average reward.
Jonathan Baxter, Peter L. Bartlett
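
To make the single-sample-path, eligibility-trace flavour of the estimator concrete, the sketch below shows one way such a GPOMDP-style update could be implemented. It is a minimal illustration, not the paper's implementation: the environment interface (env.reset, env.step) and the helpers sample_action and policy_grad_log are hypothetical names introduced here for the example.

```python
import numpy as np

def gpomdp_gradient_estimate(env, sample_action, policy_grad_log,
                             theta, beta=0.9, num_steps=100_000, seed=0):
    """Sketch of a GPOMDP-style estimate of the average-reward gradient,
    computed along a single sample path of the controlled POMDP.

    Assumed (hypothetical) interface:
      env.reset() -> initial observation y
      env.step(u) -> (next observation, reward)
      sample_action(theta, y, rng) -> action u drawn from the stochastic policy
      policy_grad_log(theta, y, u) -> grad_theta log mu(u | theta, y)
    """
    rng = np.random.default_rng(seed)
    z = np.zeros_like(theta)       # eligibility trace
    delta = np.zeros_like(theta)   # running gradient estimate
    y = env.reset()
    for t in range(num_steps):
        u = sample_action(theta, y, rng)
        y_next, reward = env.step(u)
        # Discounted eligibility trace: beta in [0, 1) controls the
        # bias-variance trade-off of the resulting estimate.
        z = beta * z + policy_grad_log(theta, y, u)
        # Incremental average of reward-weighted traces.
        delta += (reward * z - delta) / (t + 1)
        y = y_next
    return delta
```

An estimate returned by such a routine could then be fed to a line-search or conjugate-gradient outer loop, in the spirit of the optimization procedure the abstract refers to.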