ALT 2006, Springer

General Discounting Versus Average Reward

Consider an agent interacting with an environment in cycles. In every interaction cycle the agent is rewarded for its performance. We compare the average reward U from cycle 1 to m (average value) with the future discounted reward V from cycle k to ∞ (discounted value). We consider essentially arbitrary (non-geometric) discount sequences and arbitrary reward sequences (non-MDP environments). We show that U for m → ∞ and V for k → ∞ are asymptotically equal, provided both limits exist. Further, if the effective horizon grows linearly with k or faster, then the existence of the limit of U implies that the limit of V exists. Conversely, if the effective horizon grows linearly with k or slower, then the existence of the limit of V implies that the limit of U exists.
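To make the two quantities concrete, here is a minimal sketch reconstructed from the abstract alone; the symbols r_i (rewards), γ_i (discounts), Γ(k) (normalizer), and the exact normalizations are assumptions, not necessarily the paper's precise notation:

    U(m) = (1/m) Σ_{i=1..m} r_i                   average value over cycles 1..m
    Γ(k) = Σ_{i=k..∞} γ_i                         remaining discount mass (assumed finite)
    V(k) = (1/Γ(k)) Σ_{i=k..∞} γ_i r_i            normalized discounted value from cycle k

    Claim: lim_{m→∞} U(m) = lim_{k→∞} V(k), provided both limits exist.

One way to read the "effective horizon" condition: take h(k) as the smallest h with Γ(k+h) ≤ Γ(k)/2, i.e. the half-life of the remaining discount mass. Under this (assumed) definition, geometric discounting γ_i = γ^i gives a constant h(k), whereas γ_i = 1/i² gives Γ(k) ≈ 1/k and hence h(k) ≈ k, the linear-growth boundary case in the abstract.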
Type: Conference
Year: 2006
Where: ALT
Authors: Marcus Hutter