Motion compensation with redundant-wavelet multihypothesis, in which multiple predictions that are diverse in transform phase contribute to a single motion estimate, is incorporated into the fully scalable MC-EZBC video coder. The bidirectional motion-compensated temporal-filtering process of MC-EZBC is adapted to the redundant-wavelet domain, wherein transform redundancy is exploited to generate a phase-diverse multihypothesis prediction of the true temporal filtering. Noise not captured by the motion model is substantially reduced, leading to greater coding efficiency. In experimental results, the proposed system exhibits substantial gains in rate-distortion performance over the original MC-EZBC coder for sequences with fast or complex motion.
Joseph B. Boettcher, James E. Fowler
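The core mechanism described above can be illustrated with a minimal 1-D sketch: because a redundant (undecimated) wavelet transform retains every spatial phase of the decomposition, its inverse implicitly averages the reconstructions obtained from the different phases, so motion compensation applied to the subband coefficients yields a blended, phase-diverse multihypothesis prediction. The code below is a hypothetical illustration using a one-level undecimated Haar transform, circular extension, and an integer displacement; it is not the actual 2-D transform or lifting implementation used in MC-EZBC, and all function names (`rdwt_haar`, `irdwt_haar`, `mc_prediction`) are invented for this example.

```python
import numpy as np

def rdwt_haar(x):
    # One-level undecimated (redundant) Haar transform: no subsampling,
    # so every spatial phase of the decomposition is retained.
    xm = np.roll(x, 1)             # x[n-1] with circular extension
    low = (x + xm) / 2.0           # lowpass subband
    high = (x - xm) / 2.0          # highpass subband
    return low, high

def irdwt_haar(low, high):
    # Inverse transform: average the reconstructions from the two
    # transform phases (the source of the phase-diverse hypotheses).
    rec_even = low + high                            # phase-0 hypothesis of x[n]
    rec_odd = np.roll(low, -1) - np.roll(high, -1)   # phase-1 hypothesis of x[n]
    return (rec_even + rec_odd) / 2.0

def mc_prediction(ref, disp):
    # Motion compensation in the redundant-wavelet domain: apply the same
    # (integer, circular) displacement to every subband, then invert. The
    # inverse blends the phase-diverse hypotheses into a single prediction.
    low, high = rdwt_haar(ref)
    return irdwt_haar(np.roll(low, disp), np.roll(high, disp))

ref = np.array([1.0, 4.0, 2.0, 8.0, 5.0, 7.0, 3.0, 6.0])
pred = mc_prediction(ref, 2)
# For an integer displacement, the blended prediction matches the shifted frame.
assert np.allclose(pred, np.roll(ref, 2))
```

With subpixel displacements or independent per-phase prediction errors, the two hypotheses differ, and their average attenuates noise that the motion model fails to capture, which is the effect the abstract credits for the coding-efficiency gain.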