A novel algorithm for robustly segmenting changes between different images of a scene is presented. This computationally efficient algorithm is based on a non-linear comparison of gradient structure in overlapping image regions and offers intrinsic invariance to changing illumination, without recourse to background-model adaptation. High accuracy is demonstrated on test video data with and without illumination changes. The technique is applicable to motion segmentation as well as to measuring longer-term object changes.
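The core idea of comparing gradient structure rather than raw intensities can be sketched as follows. This is an illustrative NumPy implementation, not the paper's exact formulation: it uses non-overlapping blocks for simplicity (the abstract refers to overlapping regions), central-difference gradients as a stand-in for whatever gradient operator the paper uses, and a normalised cross-correlation of the two gradient fields as the non-linear comparison; the block size and threshold are arbitrary choices.

```python
import numpy as np

def gradients(img):
    """Central-difference image gradients (illustrative choice of operator)."""
    gy, gx = np.gradient(img.astype(float))
    return gx, gy

def change_mask(ref, cur, block=8, thresh=0.5, eps=1e-6):
    """Flag blocks whose gradient *structure* differs between ref and cur.

    Comparing gradient fields via normalised cross-correlation ignores a
    global gain or offset in brightness: scaling the image scales both
    gradient fields equally, and adding a constant leaves them unchanged,
    so the correlation stays near 1 under illumination change.
    """
    rx, ry = gradients(ref)
    cx, cy = gradients(cur)
    h, w = ref.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            # Stack the x/y gradient components of this block into vectors.
            r = np.concatenate([rx[i:i+block, j:j+block].ravel(),
                                ry[i:i+block, j:j+block].ravel()])
            c = np.concatenate([cx[i:i+block, j:j+block].ravel(),
                                cy[i:i+block, j:j+block].ravel()])
            # Normalised cross-correlation of the gradient fields
            # (non-linear in the pixel values): near 1 when the local
            # structure matches, low when the block's content has changed.
            corr = np.sum(r * c) / (np.sqrt(np.sum(r * r) * np.sum(c * c)) + eps)
            if corr < thresh:
                mask[i // block, j // block] = True
    return mask
```

A pure illumination change (e.g. `cur = 1.5 * ref + 10`) leaves the mask empty, while replacing a block's content with differently oriented structure flags it; this is the sense in which the comparison is intrinsically illumination-invariant, with no background model to adapt.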