Explanations are a technique for reasoning about constraint propagation that has been applied in many learning, backjumping and user-interaction algorithms for constraint programming. To date, explanations for constraints have usually been recorded eagerly when constraint propagation happens, which leads to inefficient use of time and space, because many will never be used. In this paper we show that it is possible and highly effective to calculate explanations retrospectively when they are needed. To this end, we implement lazy explanations in a state of the art learning framework. Experimental results confirm the effectiveness of the technique: we achieve reductions of up to a factor of 200 in the number of explanations calculated, and of up to a factor of 5 in overall solve time.
Key words: constraint programming, explanations, learning
Ian P. Gent, Ian Miguel, Neil C. A. Moore
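To illustrate the core idea, here is a minimal sketch (all names hypothetical; not the paper's actual framework) contrasting lazy with eager explanation recording. Instead of storing a full explanation at every propagation, the solver stores only a reference to the propagator and the inferred literal, and asks the propagator to reconstruct the explanation retrospectively if conflict analysis ever needs it:

```python
class LazyInference:
    """Record only what is needed to rebuild an explanation later."""
    def __init__(self, propagator, literal):
        self.propagator = propagator   # who made the inference
        self.literal = literal         # what was inferred

    def explain(self):
        # Computed on demand: most inferences never reach conflict
        # analysis, so most explanations are never calculated.
        return self.propagator.explain(self.literal)


class NotEqualPropagator:
    """Toy propagator for x != y over 0/1 domains (illustration only)."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def explain(self, literal):
        # x was fixed because the other variable took the
        # complementary value.
        var, val = literal
        other = self.y if var == self.x else self.x
        return [(other, 1 - val)]


p = NotEqualPropagator("x", "y")
inf = LazyInference(p, ("x", 0))   # x=0 was inferred because y=1
print(inf.explain())               # -> [('y', 1)], built retrospectively
```

Under the eager scheme, the list `[('y', 1)]` would be allocated and stored at propagation time for every such inference; under the lazy scheme it is built only for the small fraction of inferences that conflict analysis actually visits.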