In this paper we describe the initial results of an investigation into the relationship between Markov Decision Processes (MDPs) and Belief-Desire-Intention (BDI) architectures. While these approaches look rather different, and have at times been seen as alternatives, we show that they can be related to one another quite easily. In particular, we show how to map intentions in the BDI architecture to policies in an MDP and vice versa. In both cases, we derive both theoretical and related algorithmic mappings. While the mappings that we obtain are of theoretical rather than practical value, we describe how they can be extended to provide mappings that are useful in practice.

Categories and Subject Descriptors
I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Intelligent Agents

General Terms
Theory, Design

Keywords
Markov Decision Process, Policy, Intention
Gerardo I. Simari, Simon Parsons
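One simple reading of the abstract's policy-to-intention direction can be sketched in code. The toy MDP, all function names, and the plan-unrolling scheme below are our own illustrative assumptions, not the paper's actual construction: we solve a tiny deterministic MDP by value iteration and then unroll the resulting policy from a start state into a linear action sequence, which plays the role of an intention-like plan.

```python
# Illustrative sketch only: a 3-state deterministic MDP, solved by
# value iteration, whose greedy policy is unrolled into a linear plan.
# The MDP and all names here are invented for illustration.

STATES = [0, 1, 2]          # state 2 is the goal
ACTIONS = ["left", "right"]

def step(s, a):
    """Deterministic transition function."""
    return min(s + 1, 2) if a == "right" else max(s - 1, 0)

def reward(s, a, s2):
    """Reward 1 for reaching (or staying at) the goal, else 0."""
    return 1.0 if s2 == 2 else 0.0

def value_iteration(gamma=0.9, iters=50):
    """Compute the greedy policy pi: S -> A for the toy MDP."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: max(reward(s, a, step(s, a)) + gamma * V[step(s, a)]
                    for a in ACTIONS)
             for s in STATES}
    return {s: max(ACTIONS,
                   key=lambda a: reward(s, a, step(s, a)) + gamma * V[step(s, a)])
            for s in STATES}

def policy_to_plan(pi, s0, goal, horizon=10):
    """Unroll a policy from s0 into a linear action sequence
    (an intention-like plan), stopping at the goal."""
    plan, s = [], s0
    for _ in range(horizon):
        if s == goal:
            break
        plan.append(pi[s])
        s = step(s, pi[s])
    return plan

pi = value_iteration()
print(policy_to_plan(pi, 0, 2))  # -> ['right', 'right']
```

Note that the unrolled plan is only equivalent to the policy along the trajectory actually taken; a policy prescribes an action for every state, which is one reason the paper's mappings require more machinery than this sketch.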