We address performance issues associated with simulation-based algorithms for optimizing Markov reward processes. Specifically, we are concerned with algorithms that exploit the regenerative structure of the process in estimating the gradient of the objective function with respect to control parameters. In many applications, states that initially have short expected return times may become infrequently visited as the control parameters are updated. As a result, unbiased updates to the control parameters can become so infrequent as to render the algorithm impractical. The performance of these algorithms can be significantly improved by adapting the state that is used to mark regenerative cycles. In this paper, we introduce such an adaptation procedure, give initial arguments for its convergence properties, and illustrate its application in two numerical examples. The examples relate to the optimal pricing of communication network resources for congestion-controlled traffic.
Enrique Campos-Náñez, Stephen D. Patek
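To make the idea concrete, below is a minimal sketch (not the authors' exact procedure) of a regenerative likelihood-ratio gradient method for a small Markov chain with softmax-parameterized transitions, where parameter updates occur only at regeneration epochs and the regeneration state is periodically re-selected as the most frequently visited state. All names (`n_states`, `theta`, `adapt_every`, the reward vector) are illustrative assumptions, and the adaptation rule shown is one plausible heuristic in the spirit of the paper.

```python
# Sketch: regenerative gradient estimation with an adaptive regeneration state.
# Assumptions: softmax transition parameterization, per-state rewards, and a
# running average-reward estimate; none of these specifics come from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_states = 4
rewards = np.array([1.0, 0.2, 0.2, 0.0])   # illustrative per-state rewards
theta = np.zeros((n_states, n_states))     # transition logits (control params)

def probs(row_logits):
    z = np.exp(row_logits - row_logits.max())
    return z / z.sum()

def optimize(n_cycles=5000, step=0.005, eta_step=0.01, adapt_every=500):
    eta = 0.0          # running estimate of the average reward
    i_star = 0         # state marking regenerative cycles
    visits = np.zeros(n_states)
    s = i_star
    for cycle in range(n_cycles):
        z = np.zeros_like(theta)      # eligibility: accumulated score functions
        delta = np.zeros_like(theta)  # per-cycle gradient estimate
        while True:
            p = probs(theta[s])
            s_next = rng.choice(n_states, p=p)
            score = -p                # grad of log softmax w.r.t. row-s logits
            score[s_next] += 1.0
            z[s] += score
            delta += (rewards[s] - eta) * z
            eta += eta_step * (rewards[s] - eta)
            visits[s] += 1
            s = s_next
            if s == i_star:           # cycle closes at the regeneration state
                break
        theta += step * delta         # unbiased update at regeneration only
        # Adaptation: if i_star has become rarely visited under the current
        # theta, cycles grow long and updates rare; re-mark cycles by the
        # most frequently visited state in the recent window.
        if (cycle + 1) % adapt_every == 0:
            i_star = int(np.argmax(visits))
            visits[:] = 0
    return theta, eta, i_star
```

The key design point the sketch illustrates is the failure mode named in the abstract: with a fixed `i_star`, expected cycle length can blow up as `theta` drifts, so the inner loop runs long between updates; re-selecting the marker state keeps regenerations, and hence unbiased updates, frequent.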