The Partially Observable Markov Decision Process (POMDP) is a popular framework for planning under uncertainty in partially observable domains. Yet, the POMDP model is risk-neutral in that it assumes that the agent maximizes the expected reward of its actions. In contrast, in domains like financial planning, the agent's decisions are often required to be risk-sensitive, that is, to maximize the utility of its actions for a non-linear utility function. Unfortunately, existing POMDP solvers cannot solve such planning problems exactly. By considering piecewise linear approximations of utility functions, this paper addresses this shortcoming with three contributions: (i) it defines the Risk-Sensitive POMDP model; (ii) it derives the fundamental properties of the underlying value functions and provides a functional value iteration technique to compute them exactly; and (iii) it proposes an efficient procedure to determine the dominated value functions, which speeds up the algorithm. Our experiments show t...
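For concreteness, a piecewise linear approximation of a non-linear utility function $U$ over accumulated reward can be sketched as follows (the breakpoint and slope notation here is ours, chosen purely for illustration and not drawn from the paper):
$$
\hat{U}(w) \;=\; m_k\, w + c_k \qquad \text{for } w \in [b_{k-1}, b_k],\; k = 1, \dots, K,
$$
where $b_0 < b_1 < \cdots < b_K$ are breakpoints partitioning the range of accumulated reward $w$, and each segment has slope $m_k$ and intercept $c_k$; refining the partition brings $\hat{U}$ closer to $U$.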