Markov Decision Processes (MDPs) are widely used as a framework for planning under uncertainty. They make it possible to compute optimal sequences of actions to achieve a given goal while accounting for actuator uncertainty. However, the algorithms classically used to solve MDPs become intractable on problems with large state spaces: plans are computed over the whole state space, without exploiting any knowledge about the initial state of the problem. In this paper, we propose a new technique to build partial plans for a mobile robot by considering only a restricted MDP, which contains a small set of states forming a path between the initial state and the goal state. To ensure a good-quality solution, this path must be very similar to the one that would have been computed on the whole environment. We present a new method to compute such partial plans, showing that representing the environment as a directed graph is very helpful for finding near-optimal paths. Partial pla...
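As a rough sketch of the general idea, the restricted-MDP approach could look like the toy example below: a deterministic shortest path is first computed on a directed graph of the environment, and value iteration is then run only on the states along that path and their immediate neighbours. The grid world, the slip probability, and the penalty for leaving the restricted set are all illustrative assumptions, not details from the paper.

```python
import heapq

# Hypothetical toy environment: a 5x5 grid world with four move actions.
GRID_W, GRID_H = 5, 5
START, GOAL = (0, 0), (4, 4)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def neighbors(s):
    """Successor states of s in the directed transition graph."""
    for dx, dy in MOVES:
        nx, ny = s[0] + dx, s[1] + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
            yield (nx, ny)

def shortest_path(start, goal):
    """Dijkstra on the deterministic graph (unit edge costs)."""
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == goal:
            break
        if d > dist[s]:
            continue
        for n in neighbors(s):
            nd = d + 1
            if nd < dist.get(n, float("inf")):
                dist[n], prev[n] = nd, s
                heapq.heappush(heap, (nd, n))
    path, s = [goal], goal
    while s != start:
        s = prev[s]
        path.append(s)
    return path[::-1]

def restricted_states(path):
    """Path states plus their immediate neighbours form the partial MDP."""
    states = set(path)
    for s in path:
        states.update(neighbors(s))
    return states

def value_iteration(states, goal, gamma=0.95, slip=0.1, eps=1e-6):
    """Value iteration restricted to `states`; each step costs 1, the
    robot slips (stays in place) with probability `slip`, and states
    outside the restricted set get a pessimistic fixed value."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue
            best = -float("inf")
            for n in neighbors(s):
                succ = V.get(n, -10.0)  # pessimistic value off the set
                q = -1 + gamma * ((1 - slip) * succ + slip * V[s])
                best = max(best, q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

The point of the sketch is the size of the planning problem: value iteration touches only the handful of states near the graph path rather than the full state space, which is what makes the partial plan cheap to compute.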