Current video coding schemes employ motion compensation to exploit the fact that the signal forms an auto-regressive process along the motion trajectory, and remove temporal redundancies via prediction from previously reconstructed samples. However, the decoder may, in principle, also exploit correlations with the encoding information received for future frames. In contrast to current decoders, which reconstruct every block as soon as the corresponding quantization indices are available, we propose an estimation-theoretic delayed decoding scheme that leverages quantization and motion information of one or more future frames to refine the reconstruction of the current block. The scheme, implemented in the transform domain, efficiently combines all available (including future) information in an appropriately derived conditional pdf to obtain the optimal delayed reconstruction of each transform coefficient in the frame. Experiments demonstrate substantial gains over the standard H.264 decoder...
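
To make the idea concrete, the following is a minimal numerical sketch, not the paper's implementation, of how a conditional pdf can combine the current coefficient's quantization interval with an interval implied by a future frame to yield the MMSE (conditional-mean) reconstruction. The AR(1) Gaussian model along the motion trajectory, the function name `delayed_reconstruction`, and the parameters `rho`, `sigma_x`, `sigma_z` are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def delayed_reconstruction(cur_interval, fut_interval, rho, sigma_x, sigma_z, grid=2001):
    """Sketch of delayed decoding for one transform coefficient x_n, given
      - the quantization interval [a, b) decoded for x_n in the current frame,
      - the interval [c, d) implied for the coefficient x_{n+1} that follows it
        along the motion trajectory in a future frame,
    under the assumed model x_{n+1} = rho * x_n + z_n with Gaussian x_n and z_n."""
    a, b = cur_interval
    c, d = fut_interval
    x = np.linspace(a, b, grid)                # uniform grid over the current interval
    prior = norm.pdf(x, scale=sigma_x)         # assumed marginal pdf of x_n
    # Likelihood that the future coefficient falls in its decoded interval, given x_n = x
    fut = norm.cdf(d, loc=rho * x, scale=sigma_z) - norm.cdf(c, loc=rho * x, scale=sigma_z)
    w = prior * fut                            # unnormalized conditional pdf on [a, b)
    return float(np.sum(x * w) / np.sum(w))    # conditional mean = optimal reconstruction

# Without future information the decoder can only use the centroid of [a, b);
# the future interval reshapes the conditional pdf and shifts the estimate.
print(delayed_reconstruction((4.0, 8.0), (6.0, 10.0), rho=0.95, sigma_x=5.0, sigma_z=1.5))
```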