Markov decision processes (MDPs) and contingency planning (CP) are two widely used approaches to planning under uncertainty. MDPs are attractive because the model is extremely general and because many algorithms exist for deriving optimal plans. In contrast, CP is normally performed using heuristic techniques that do not guarantee optimality, but the resulting plans are more compact and more understandable. The inability to present MDP policies in a clear, intuitive way has limited their applicability in some important domains. We examine the relationship between the two paradigms and present an anytime algorithm for deriving optimal contingency plans for an MDP. The resulting algorithm effectively combines the strengths of the two approaches.
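To make the relationship between the two paradigms concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it solves a small hypothetical MDP by value iteration and then extracts a compact, tree-structured contingency plan by expanding only the outcomes reachable under the greedy policy from the start state, up to an assumed `depth_budget` cutoff. All names, the toy MDP, and the budget parameter are assumptions introduced here for illustration.

```python
# Illustrative sketch only (not the paper's method): value iteration on a toy
# MDP, then extraction of a contingency-plan tree covering just the states
# reachable under the greedy policy. The MDP, `depth_budget`, and all names
# are hypothetical.

# MDP: transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    "s0": {"go":   [(0.8, "s1", 0.0), (0.2, "s2", 0.0)],
           "wait": [(1.0, "s0", -0.1)]},
    "s1": {"go":   [(1.0, "goal", 1.0)]},
    "s2": {"go":   [(0.5, "goal", 1.0), (0.5, "s0", -0.2)]},
    "goal": {},  # terminal state
}
GAMMA = 0.95

def value_iteration(eps=1e-6):
    """Compute (near-)optimal state values for the toy MDP."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:  # terminal state keeps value 0
                continue
            best = max(sum(p * (r + GAMMA * V[ns]) for p, ns, r in outs)
                       for outs in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

def greedy_action(s, V):
    """Best action in state s with respect to value function V."""
    return max(transitions[s],
               key=lambda a: sum(p * (r + GAMMA * V[ns])
                                 for p, ns, r in transitions[s][a]))

def extract_plan(s, V, depth_budget):
    """Grow a contingency tree only along states reachable under the greedy
    policy, stopping at the depth budget (larger budgets give plans that
    approach the full optimal policy)."""
    if not transitions[s] or depth_budget == 0:
        return {"state": s, "action": None, "branches": {}}
    a = greedy_action(s, V)
    branches = {ns: extract_plan(ns, V, depth_budget - 1)
                for _, ns, _ in transitions[s][a]}
    return {"state": s, "action": a, "branches": branches}

if __name__ == "__main__":
    V = value_iteration()
    plan = extract_plan("s0", V, depth_budget=3)
    print(plan)
```

The sketch shows why a contingency plan can be far more compact than a full MDP policy: the tree mentions only the situations that can actually arise from the start state, rather than prescribing an action for every state of the model.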