This paper presents a new framework for accumulating beliefs in spoken dialogue systems. The technique is based on updating a Bayesian network that represents the underlying state of a Partially Observable Markov Decision Process (POMDP). POMDP models provide a principled approach to handling uncertainty in dialogue, but they generally scale poorly with the size of the state and action spaces. The proposed framework, in contrast, scales well and can be extended to handle complex dialogues. Learning is achieved with a factored summarising function applicable to many slot-filling dialogues. The framework also provides a good structure from which to build hand-crafted policies; for very complex dialogues, this allows the POMDP’s principled treatment of uncertainty to be incorporated without requiring computationally intensive learning algorithms. Simulations show that the proposed framework outperforms standard techniques as error rates increase.
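To make the idea of belief accumulation concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: a single slot of a factored dialogue state is tracked with a Bayesian update, where the user's goal is assumed fixed (identity transition model) and the recogniser's confidence score is treated as the probability of a correct observation, with the remaining mass spread uniformly over the other values. All names and the observation model here are assumptions for illustration.

```python
def update_belief(belief, observed_value, confidence):
    """Bayesian update of one slot's belief given a noisy observation.

    belief: dict mapping slot values to probabilities (sums to 1).
    observed_value: the value reported by the speech recogniser.
    confidence: assumed probability the recogniser heard correctly;
        the remainder is spread uniformly over the other values.
    """
    n = len(belief)
    error_share = (1.0 - confidence) / (n - 1) if n > 1 else 0.0
    # Bayes' rule with an identity transition model (fixed user goal):
    #   b'(v) ∝ P(o | v) * b(v)
    posterior = {
        v: (confidence if v == observed_value else error_share) * p
        for v, p in belief.items()
    }
    total = sum(posterior.values())
    return {v: p / total for v, p in posterior.items()}


# Uniform prior over three possible values of a hypothetical "food" slot.
b = {"chinese": 1 / 3, "indian": 1 / 3, "italian": 1 / 3}
b = update_belief(b, "indian", confidence=0.7)  # one noisy observation
b = update_belief(b, "indian", confidence=0.7)  # repeated evidence accumulates
```

Because evidence accumulates across turns, two moderately confident observations of the same value yield a much sharper belief than either alone, which is the property that lets POMDP-style systems stay robust as recognition errors increase.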