5th International Symposium on Visual Computing, Las Vegas, Nevada, USA, Nov 30 - Dec 2, 2009

Stella X. Yu, Dimitri A. Lisin

The goal of lossy image compression ought to be to reduce entropy while preserving the perceptual quality of the image. Using gaze-tracked change detection experiments, we discover that human vision attends to one scale at a time. This evidence suggests that saliency should be treated on a per-scale basis, rather than aggregated into a single 2D map over all scales. We develop a compression algorithm that adaptively reduces the entropy of the image according to its saliency map within each scale, using the Laplacian pyramid as both the multiscale decomposition and the saliency measure of the image. Finally, we return to psychophysics to evaluate our results. Surprisingly, images compressed with our method are sometimes judged to be better than the originals.
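
To make the per-scale treatment concrete, the following is a minimal Python/NumPy sketch, not the authors' implementation: it builds a Laplacian pyramid and quantizes each band with a step size modulated by that band's own smoothed magnitude, used here as an assumed proxy for per-scale saliency; the function names and parameters (laplacian_pyramid, quantize_per_scale, base_step) are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom


def laplacian_pyramid(img, levels=4):
    """Decompose an image into band-pass detail levels plus a low-pass residual."""
    pyramid = []
    current = img.astype(np.float64)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma=1.0)
        down = blurred[::2, ::2]  # decimate by 2
        up = zoom(down, 2.0, order=1)[: current.shape[0], : current.shape[1]]
        pyramid.append(current - up)  # band-pass detail at this scale
        current = down
    pyramid.append(current)  # low-pass residual
    return pyramid


def quantize_per_scale(pyramid, base_step=4.0):
    """Quantize each band; the step grows where that band's own smoothed
    magnitude (an assumed per-scale saliency proxy) is small."""
    out = []
    for band in pyramid[:-1]:
        saliency = gaussian_filter(np.abs(band), sigma=2.0)
        saliency = saliency / (saliency.max() + 1e-8)  # normalize to [0, 1]
        step = base_step * (2.0 - saliency)  # coarser quantization in non-salient regions
        out.append(np.round(band / step) * step)
    out.append(pyramid[-1])  # keep the low-pass residual intact
    return out


def reconstruct(pyramid):
    """Invert the pyramid by upsampling and adding back each detail band."""
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        up = zoom(current, 2.0, order=1)[: band.shape[0], : band.shape[1]]
        current = up + band
    return current


if __name__ == "__main__":
    img = np.random.rand(256, 256) * 255  # stand-in for a grayscale image
    recon = reconstruct(quantize_per_scale(laplacian_pyramid(img)))
    print("max reconstruction error:", np.abs(recon - img).max())
```

In this sketch the low-pass residual is left unquantized so coarse luminance structure is preserved, while entropy is removed independently at each scale from the regions that band marks as non-salient.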