Abstract: We consider lossy source coding when side information affecting the distortion measure may be available at the encoder, decoder, both, or neither. For example, such distortion side information can model reliabilities for noisy measurements, sensor calibration information, or perceptual effects such as masking and sensitivity to context. When the distortion side information is statistically independent of the source, we show that in many cases (e.g., for additive or multiplicative distortion side information) there is no penalty for knowing the side information only at the encoder, and no advantage to knowing it at the decoder. Furthermore, for quadratic distortion measures scaled by the distortion side information, we evaluate the penalty for lack of encoder knowledge and show that it can be arbitrarily large. In this scenario, we also sketch transform-based quantizer constructions that efficiently exploit encoder side information in the high-resolution limit.
Emin Martinian, Gregory W. Wornell, Ram Zamir
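For concreteness, one natural form of the scaled quadratic distortion described in the abstract is sketched below; the notation ($x$ for the source sample, $\hat{x}$ for its reconstruction, $q$ for the distortion side information) is an illustrative assumption rather than a quotation from the paper.

% Illustrative (assumed) form of a quadratic distortion measure scaled by the
% distortion side information $q$:
\[
  d(x, \hat{x}, q) \;=\; q\,(x - \hat{x})^2 ,
\]
so that larger values of $q$ correspond to samples where accurate reconstruction matters more, e.g., more reliable measurements or perceptually more sensitive regions.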