
AAAI
2015

Solving Uncertain MDPs with Objectives that Are Separable over Instantiations of Model Uncertainty

Markov Decision Problems (MDPs) offer an effective mechanism for planning under uncertainty. However, due to unavoidable uncertainty over models, it is difficult to obtain an exact specification of an MDP. We are interested in solving MDPs where the transition and reward functions are not exactly specified. Existing research has primarily focused on computing infinite horizon stationary policies when optimizing robustness, regret, and percentile based objectives. We focus specifically on finite horizon problems, with a special emphasis on objectives that are separable over individual instantiations of model uncertainty (i.e., objectives that can be expressed as a sum over instantiations of model uncertainty): (a) First, we identify two separable objectives for uncertain MDPs: Average Value Maximization (AVM) and Confidence Probability Maximization (CPM). (b) Second, we provide optimization based solutions to compute policies for uncertain MDPs with such objectives. In particular, we...
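To make the two separable objectives concrete, here is a minimal sketch (not from the paper; all function and variable names are illustrative assumptions) that evaluates a fixed finite-horizon policy on a set of sampled model instantiations. AVM is then the mean of the per-instantiation values, and CPM is the fraction of instantiations whose value meets a confidence threshold, so both decompose as a sum over instantiations:

```python
import numpy as np

def evaluate_policy(T, R, policy, H):
    """Finite-horizon value of a fixed deterministic policy for ONE
    instantiation of the uncertain model.
    T: [S, A, S] transition probabilities; R: [S, A] rewards;
    policy: [S] action indices; H: horizon length."""
    S = R.shape[0]
    idx = np.arange(S)
    V = np.zeros(S)
    for _ in range(H):
        # Bellman backup restricted to the policy's chosen actions.
        V = R[idx, policy] + T[idx, policy] @ V
    return V

def avm_and_cpm(instantiations, policy, H, s0, beta):
    """Separable objectives over sampled instantiations (T_q, R_q):
    AVM = average value at start state s0;
    CPM = fraction of instantiations with value >= threshold beta."""
    vals = np.array([evaluate_policy(T, R, policy, H)[s0]
                     for T, R in instantiations])
    return vals.mean(), (vals >= beta).mean()
```

Because each instantiation contributes an independent term, both objectives fit the paper's separability condition; the paper's contribution is optimizing the policy under these objectives, which this sketch does not attempt.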
Added 27 Mar 2016
Updated 27 Mar 2016
Type Journal
Year 2015
Where AAAI
Authors Yossiri Adulyasak, Pradeep Varakantham, Asrar Ahmed, Patrick Jaillet