Determining whether a solution is optimal or near-optimal is a fundamental question in optimization theory, algorithms, and computation. For instance, the Karush-Kuhn-Tucker conditions provide necessary and sufficient optimality conditions for certain classes of problems, and bounds on optimality gaps are frequently used as part of optimization algorithms. Such bounds are obtained through Lagrangian, integrality, or semidefinite programming relaxations. An alternative approach in stochastic programming is to use Monte Carlo sampling-based estimators of the optimality gap. In this tutorial, we present a simple, easily implemented procedure that forms point and interval estimators of the optimality gap of a given candidate solution. We then discuss methods to reduce the computational effort, bias, and variance of our simplest estimator. We also provide a framework that allows the use of these optimality gap estimators in an algorithmic way, by providing rules to iteratively increase the sample sizes and to terminate.
Güzin Bayraksan, David P. Morton
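To give a concrete sense of the kind of sampling-based estimator described in the abstract, here is a minimal sketch (not the authors' exact procedure) of a multiple-replications optimality-gap estimator: for a candidate solution x̂, each replication draws a sample, evaluates x̂ on that sample, solves the corresponding sample-average problem on the same sample, and records the difference; averaging across replications yields a point estimate of the gap, and the replication-to-replication variability yields a one-sided confidence interval. The Python sketch below uses a hypothetical newsvendor instance whose sample-average problem has a closed-form (quantile) solution; all costs, prices, and the demand model are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical problem data (assumptions for illustration only).
c, r = 5.0, 9.0                       # unit purchase cost and unit selling price

def sample_demand(n):
    """Assumed demand distribution for the toy newsvendor instance."""
    return rng.gamma(shape=2.0, scale=50.0, size=n)

def cost(x, d):
    """Per-scenario cost of ordering x units when demand is d."""
    return c * x - r * np.minimum(x, d)

def saa_min(d):
    """Optimal value of the sample-average problem; for this newsvendor
    instance the minimizer is the (r - c)/r empirical quantile of demand."""
    x_star = np.quantile(d, (r - c) / r)
    return cost(x_star, d).mean()

def gap_estimate(x_hat, n=500, m=30, alpha=0.05):
    """Point estimate and one-sided (1 - alpha)-level confidence interval
    on the optimality gap of x_hat, from m replications of sample size n."""
    gaps = np.empty(m)
    for j in range(m):
        d = sample_demand(n)          # same sample used for both bounds
        gaps[j] = cost(x_hat, d).mean() - saa_min(d)
    g_bar = gaps.mean()
    half_width = stats.t.ppf(1 - alpha, m - 1) * gaps.std(ddof=1) / np.sqrt(m)
    return g_bar, g_bar + half_width

point, ci_upper = gap_estimate(x_hat=80.0)
print(f"gap point estimate: {point:.2f}; one-sided 95% CI: [0, {ci_upper:.2f}]")
```

In this toy instance the sample-average problem is solved in closed form; in general, `saa_min` would be replaced by a call to an optimization solver for the sample-average approximation, and the variance- and bias-reduction ideas mentioned in the abstract would modify how the replications and samples are constructed.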