For an infinite-horizon optimal control problem, the cost does not, in general, converge. The classical work-around is to introduce a discount, or "forgetting", factor that diminishes the importance of future cost; this, however, changes the computed solution. Here, a method is presented whereby the Hamilton-Jacobi-Bellman (HJB) equation can be solved without the use of a discount factor. The HJB equation is reformulated as an eigenvalue problem, such that the principal eigenvalue corresponds to the expected cost per unit time, and the corresponding eigenfunction gives the value function (up to an additive constant) for the optimal control policy. For a certain, relevant class of problems, the eigenvalue problem is linear, so a numerical solution can be found very quickly.
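To make the eigenvalue formulation concrete, the following is a minimal sketch in a discrete, linearly solvable setting (in the spirit of Todorov-style problems; the specific transition matrix `P` and state costs `q` below are hypothetical, not from the paper). Writing the desirability as z = exp(-V), the average-cost problem becomes the linear eigenvalue problem G z = λ z with G = diag(exp(-q)) P; the principal eigenvalue gives the expected cost per unit time as -log λ, and -log z recovers the value function up to an additive constant:

```python
import numpy as np

# Hypothetical 3-state example: row-stochastic passive dynamics P
# and per-step state costs q (illustrative values only).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])
q = np.array([0.0, 1.0, 2.0])

# Linear eigenvalue problem: G z = lam * z, with G = diag(exp(-q)) @ P.
G = np.diag(np.exp(-q)) @ P

# Power iteration converges to the principal (Perron) eigenpair,
# since G is nonnegative and irreducible.
z = np.ones(3)
for _ in range(500):
    z = G @ z
    z /= np.linalg.norm(z)
lam = z @ (G @ z) / (z @ z)

avg_cost = -np.log(lam)   # expected cost per unit time
V = -np.log(z)            # value function, up to an additive constant
```

Because the problem is a plain linear eigenvalue computation, standard (sparse) eigensolvers apply directly, which is what makes the numerical solution fast relative to iterating a nonlinear HJB equation.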