Recent work has led to the development of an elegant theory of Linearly Solvable Markov Decision Processes (LMDPs) and related Path-Integral Control Problems. Traditionally, LMDPs have been formulated using stochastic policies and a control cost based on the KL divergence. In this paper, we extend this framework to a more general class of divergences: the Rényi divergences. These are parameterized by a continuous parameter α and include the KL divergence as a special case. The resulting control problems can be interpreted as solving a risk-sensitive version of the LMDP problem. For α > 0, we obtain risk-averse behavior (the degree of risk aversion increases with α), and for α < 0, we obtain risk-seeking behavior. We recover LMDPs in the limit as α → 0. This work generalizes the recently developed risk-sensitive path-integral control formalism, which can be seen as the continuous-time limit of the results obtained in this paper. To the best of our ...
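For concreteness, the α → 0 convention above suggests a shifted-order parameterization of the Rényi divergence; the following is a sketch of one such parameterization (an assumption about the paper's exact convention, not taken from the text), written so that the KL divergence is recovered in the stated limit:

```latex
% Rényi divergence in a shifted parameterization: order 1 + \alpha,
% so that \alpha \to 0 recovers the KL divergence (assumed convention).
D_{1+\alpha}(p \,\|\, q)
  = \frac{1}{\alpha} \log \sum_{x} p(x)^{1+\alpha}\, q(x)^{-\alpha},
\qquad
\lim_{\alpha \to 0} D_{1+\alpha}(p \,\|\, q)
  = \sum_{x} p(x) \log \frac{p(x)}{q(x)} = \mathrm{KL}(p \,\|\, q).
```

In the standard parameterization, the Rényi divergence of order β is D_β(p‖q) = (β − 1)⁻¹ log Σₓ p(x)^β q(x)^{1−β}, and KL is recovered as β → 1; the form above is simply the substitution β = 1 + α.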