Incomplete decision algorithms can often solve larger problem instances than complete ones. The drawback is that one does not know whether the algorithm will finish soon, later, or never. This paper presents a general decision-theoretic method for optimally terminating such algorithms. The stopping policy is computed from a prior probability of the answer, a payoff model describing the value that different probability estimates would provide at different times, and the algorithm’s run-time distribution. We present a linear-time algorithm for determining the optimal stopping policy given a finite cap on the number of algorithm steps. We exemplify the approach in a manufacturing scenario with a 3-satisfiability problem. To increase accuracy, the initial satisfiability probability and the run-time distribution are conditioned on features of the instance. The probability estimate at each future time step is computed using Bayesian updating. We then extend the framework to settings w...
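To make the backward-induction idea concrete, here is a minimal Python sketch of the linear-time policy computation, under simplifying assumptions not fixed by the abstract: the incomplete algorithm can only prove satisfiability (a "success"), an unsatisfiable instance never produces a success, and the names `prior`, `success_prob`, and `payoff`, as well as the geometric run-time distribution in the usage lines, are all illustrative placeholders rather than the paper's actual models. A single backward pass from the step cap T compares, at each step, the payoff of stopping with the current Bayesian posterior against the expected payoff of running one more step, which is why the policy can be computed in time linear in T.

```python
import numpy as np

def optimal_stopping_policy(prior, success_prob, payoff, T):
    """Compute an optimal stopping policy by a single backward pass.

    prior        -- prior probability that the instance is satisfiable
    success_prob -- success_prob[t] = P(first success at step t | satisfiable),
                    for t = 1..T (index 0 is unused)
    payoff       -- payoff(p, t): value of reporting probability estimate p at step t
    T            -- finite cap on the number of algorithm steps
    """
    # Survival function: P(no success by step t | satisfiable).
    survive = np.ones(T + 1)
    for t in range(1, T + 1):
        survive[t] = survive[t - 1] - success_prob[t]

    # Bayesian posterior P(satisfiable | no success by step t); an unsatisfiable
    # instance never yields a success, so each failed step is evidence against "yes".
    posterior = prior * survive / (prior * survive + (1.0 - prior))

    value = np.zeros(T + 1)             # expected value of acting optimally from step t on
    stop = np.zeros(T + 1, dtype=bool)  # stop[t]: is stopping optimal at step t?

    # At the cap we must stop and report the current estimate.
    value[T] = payoff(posterior[T], T)
    stop[T] = True

    # Backward induction over the horizon: one pass, linear in T.
    for t in range(T - 1, -1, -1):
        # Chance the next step succeeds, given no success so far:
        # P(satisfiable and first success at t+1) / P(no success by t).
        hazard = prior * success_prob[t + 1] / (prior * survive[t] + (1.0 - prior))
        # On success we can report p = 1 at step t+1; otherwise we face the
        # optimal continuation value from step t+1.
        continue_value = hazard * payoff(1.0, t + 1) + (1.0 - hazard) * value[t + 1]
        stop_value = payoff(posterior[t], t)
        stop[t] = stop_value >= continue_value
        value[t] = max(stop_value, continue_value)

    return stop, posterior

# Illustrative use: a geometric run-time distribution and a payoff that rewards
# confident estimates but decays with time (both purely hypothetical choices).
T = 1000
q = 0.01                                   # per-step success chance given satisfiable
success_prob = np.zeros(T + 1)
success_prob[1:] = q * (1.0 - q) ** np.arange(T)
payoff = lambda p, t: 2.0 * abs(p - 0.5) * 0.999 ** t
stop, posterior = optimal_stopping_policy(0.7, success_prob, payoff, T)
```

The hazard term is where the abstract's Bayesian updating enters: the probability that the next step resolves the instance is weighted by the posterior belief that the instance is satisfiable at all, so as unsuccessful steps accumulate, continuing becomes less attractive relative to stopping and reporting the current estimate.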