Partially Observable Markov Decision Processes (POMDPs) provide a general framework for AI planning, but they lack the structure for representing real-world planning problems in a convenient and efficient way. Representations built on logic allow problems to be specified in a compact and transparent manner. Moreover, decision-making algorithms can assume and exploit structure found in the state space, actions, observations, and success criteria, and can solve problems with large state spaces with relative efficiency. In recent years, researchers have sought to combine the benefits of logic with the expressiveness of POMDPs. In this paper, we show how to build upon and extend the results in this fusion of logic and decision theory. In particular, we present a compact representation of POMDPs and a method to update beliefs after actions and observations. The key contribution is our compact representation of belief states and of the operations used to update them. We then use heuristic...
Chenggang Wang, James G. Schmolze
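For context, the operation the abstract calls "updating beliefs after actions and observations" is, in the standard flat (enumerated-state) POMDP formulation, the Bayesian belief update sketched below. The notation here is the textbook one, not necessarily the paper's own: given a belief $b$, an action $a$, and a resulting observation $o$, with transition model $T(s' \mid s, a)$ and observation model $O(o \mid s', a)$, the updated belief is

\[
b'(s') \;=\; \frac{O(o \mid s', a)\,\sum_{s \in S} T(s' \mid s, a)\, b(s)}{\Pr(o \mid a, b)},
\qquad
\Pr(o \mid a, b) \;=\; \sum_{s' \in S} O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s).
\]

Because this update enumerates every state in $S$, its cost grows with the size of the state space; that cost is exactly what a compact, logic-based representation of belief states and of the update operations is intended to avoid.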