We present a new approximation method called value extrapolation for Markov processes with large or infinite state spaces. The method can be applied to calculate any performance measure that can be expressed as the expected value of a function of the system state. Traditionally, the state distribution of the system is solved in a truncated state space and then an appropriate function is summed over the states to obtain the performance measure. In our approach, the measure is obtained directly, along with the relative values of the states, by solving the Howard equations in the MDP setting. Instead of a simple state space truncation, the relative values outside the truncated state space are extrapolated using a polynomial function. The Howard equations remain linear, hence there is no significant computational penalty. The accuracy of value extrapolation, even with a heavily truncated state space, is demonstrated using processor sharing systems and data networks as examples.
Juha Leino, Jorma T. Virtamo
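
As a concrete illustration of the idea (a minimal sketch, not the authors' code), the Python fragment below applies value extrapolation to an M/M/1 queue with cost rate equal to the number of customers, so the performance measure r is the mean queue length E[X]. The Howard (Poisson) equations for the retained states 0..N are assembled as a linear system in the unknowns (r, v_1, ..., v_N), and the relative value v_{N+1}, which appears in the equation of the boundary state, is replaced by a quadratic extrapolation through v_{N-2}, v_{N-1}, v_N; the system therefore stays linear. The model, the rates lam and mu, the truncation level N, the cost function, and the quadratic stencil are all illustrative assumptions; the paper itself uses processor sharing systems and data networks. For this particular example the exact relative value function happens to be quadratic, so even a heavy truncation reproduces E[X] = rho/(1 - rho).

    import numpy as np

    # Illustrative sketch: value extrapolation for an M/M/1 queue.
    # Howard (Poisson) equations: c_i - r + sum_{j != i} q_ij (v_j - v_i) = 0,
    # with normalisation v_0 = 0 and cost rate c_i = i (number in system).
    # States 0..N are retained; v_{N+1}, needed in the equation of state N,
    # is extrapolated by a quadratic through the last three retained values:
    #     v_{N+1} ~ 3 v_N - 3 v_{N-1} + v_{N-2}   (third difference = 0).

    lam, mu, N = 0.8, 1.0, 10      # utilisation rho = 0.8, heavy truncation (N >= 2)

    # Unknown vector x = (r, v_1, ..., v_N); v_0 is fixed to 0.
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)

    def add_v(row, j, coef):
        """Add coef * v_j to a row; v_0 = 0, v_{N+1} is extrapolated."""
        if j == 0:
            return                          # v_0 = 0 contributes nothing
        if j <= N:
            row[j] += coef                  # x[j] = v_j for 1 <= j <= N
        else:                               # j = N + 1: polynomial extrapolation
            add_v(row, N, 3 * coef)
            add_v(row, N - 1, -3 * coef)
            add_v(row, N - 2, coef)

    for i in range(N + 1):
        row = np.zeros(N + 1)
        row[0] = -1.0                       # the -r term
        add_v(row, i + 1, lam)              # arrival:   i -> i + 1
        add_v(row, i, -lam)
        if i > 0:                           # departure: i -> i - 1
            add_v(row, i - 1, mu)
            add_v(row, i, -mu)
        A[i] = row
        b[i] = -float(i)                    # cost rate c_i = i moved to the RHS

    x = np.linalg.solve(A, b)
    print("value-extrapolated E[X]:", x[0])
    print("exact E[X] = rho/(1-rho):", lam / (mu - lam))

Running the sketch with the parameters above gives an estimate of E[X] that agrees with the exact value 4.0 up to numerical precision, despite the truncation at N = 10; this mirrors, on a toy model, the accuracy claim made in the abstract.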