We consider an optimization problem in probabilistic inference: given n hypotheses H_j, m possible observations O_k, their conditional probabilities p_{kj}, and a particular observation O_k, select a preferably small subset of hypotheses that excludes the true hypothesis only with some prescribed error probability ε. After specifying the optimization goal we show that this problem can be solved by a linear program in mn variables that indicate the probabilities of discarding a hypothesis given an observation. Moreover, we can compute optimal strategies in which only O(m+n) of these variables take fractional values. The manageable size of the linear programs and the mostly deterministic shape of optimal strategies make the method practicable. We interpret the dual variables as worst-case distributions of hypotheses, and we point out some counterintuitive nonmonotonic behaviour of the variables as a function of the error bound ε. One of the open problems is the existence of a purely combinatorial algorithm that is faster than ...
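To make the formulation concrete, the following is a sketch of how such a linear program might look. The abstract does not fix the optimization goal, so the variable names x_{kj}, the observation weights q_k, and the particular objective below are illustrative assumptions; only p_{kj} and ε come from the text. Here x_{kj} denotes the probability of discarding hypothesis H_j upon seeing observation O_k, so that Σ_k p_{kj} x_{kj} is the probability of wrongly discarding H_j when it is true.

% Illustrative sketch only: the objective and the weights q_k are
% assumed, since the abstract does not specify the optimization goal.
\begin{align*}
  \text{maximize}\quad
    & \sum_{k=1}^{m} \sum_{j=1}^{n} q_k \, x_{kj}
    && \text{(e.g., expected number of discarded hypotheses)}\\
  \text{subject to}\quad
    & \sum_{k=1}^{m} p_{kj} \, x_{kj} \le \varepsilon
    && \text{for all } j \text{ (true hypothesis lost with prob. at most } \varepsilon\text{)}\\
    & 0 \le x_{kj} \le 1
    && \text{for all } k, j.
\end{align*}

Under such a formulation the mn variables x_{kj} match the count stated above, and a basic optimal solution is tight on all but a bounded number of the box constraints, which is consistent with only O(m+n) variables being fractional; a dual variable attached to each error constraint weights hypothesis H_j, in line with the worst-case-distribution interpretation mentioned above.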