Policy evaluation is a critical step in the approximate solution of large Markov decision processes (MDPs): directly solving the Bellman system of |S| linear equations (where |S| is the state-space size) typically requires O(|S|^3) time. In this paper we apply a recently introduced multiscale framework for analysis on graphs to design a faster algorithm for policy evaluation. For a fixed policy π, this framework efficiently constructs a multiscale decomposition of the random walk P^π associated with the policy π. This enables efficient computation of medium- and long-term state distributions, approximation of value functions, and direct computation of the potential operator (I - P^π)^{-1} needed to solve Bellman's equation. We show that even a preliminary, non-optimized version of the solver competes with highly optimized iterative techniques, and can be computed in time O(|S| log^2 |S|).
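As an illustration of the O(|S|^3) baseline the abstract refers to, the sketch below evaluates a fixed policy on a hypothetical 5-state chain by solving the discounted Bellman system with a dense linear solve, and cross-checks the result against plain iterative evaluation. The transition matrix P, reward vector r, and discount gamma are illustrative assumptions, not taken from the paper, which works with the potential operator (I - P^π)^{-1} in its own setting.

```python
import numpy as np

# Hypothetical toy MDP: a 5-state random-walk chain under a fixed policy.
# P is the policy's transition matrix, r the expected one-step reward,
# gamma a discount factor (all illustrative values, not from the paper).
n = 5
P = np.zeros((n, n))
for s in range(n):
    P[s, max(s - 1, 0)] += 0.5      # step left (reflecting at the boundary)
    P[s, min(s + 1, n - 1)] += 0.5  # step right (reflecting at the boundary)
r = np.linspace(0.0, 1.0, n)
gamma = 0.9

# Direct solution: V = (I - gamma * P)^{-1} r.  Solving this dense linear
# system costs O(|S|^3), the baseline the multiscale solver aims to beat.
V_direct = np.linalg.solve(np.eye(n) - gamma * P, r)

# Iterative policy evaluation (fixed-point iteration) for comparison;
# the contraction factor gamma guarantees convergence.
V_iter = np.zeros(n)
for _ in range(1000):
    V_iter = r + gamma * P @ V_iter

assert np.allclose(V_direct, V_iter, atol=1e-8)
```

The direct solve is exact up to floating-point error, while the iterative variant trades cubic cost for repeated O(|S|^2) matrix-vector products, which is the trade-off the multiscale approach revisits.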