In this paper we propose a novel, adaptive method for motion-compensated 3D wavelet transformation (MC 3D-DWT) of video. The proposed method overcomes the ghosting and non-aligned aliasing artifacts which can arise in regions of motion model failure when the video is reconstructed at reduced temporal or spatial resolutions. Previous MC 3D-DWT structures either take the form of a MC temporal DWT followed by a spatial transform ("t+2D"), or perform the spatial transform first, limiting the spatial frequencies which can be jointly compensated in the temporal transform, and hence limiting the compression efficiency. Essentially, the proposed transform continuously adapts itself between these two extremes, based on information available within the compressed bit-stream. Experimental results indicate that the proposed adaptive transform significantly reduces the cost in compression efficiency required to achieve high-quality spatial and temporal scalability.
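To make the "t+2D" structure concrete, the following is a minimal illustrative sketch (not the paper's proposed adaptive transform) of one motion-compensated temporal Haar lifting step, the "t" stage that would precede the spatial DWT. The `warp` function here is a hypothetical stand-in: it models motion compensation as a global integer translation, whereas a real codec would use block- or mesh-based warping.

```python
import numpy as np

def warp(frame, mv):
    # Hypothetical motion-compensation operator: a global translational
    # shift by the integer motion vector mv = (dy, dx). A real MC 3D-DWT
    # would use a block- or mesh-based warp here instead.
    return np.roll(frame, shift=mv, axis=(0, 1))

def mc_haar_temporal(f0, f1, mv):
    """One MC temporal Haar lifting step on a pair of frames.

    Predict: high-pass h = odd frame minus motion-compensated even frame.
    Update:  low-pass  l = even frame plus half the back-warped high-pass.
    """
    h = f1 - warp(f0, mv)                        # predict step
    l = f0 + 0.5 * warp(h, (-mv[0], -mv[1]))     # update step
    return l, h

def inv_mc_haar_temporal(l, h, mv):
    # Lifting is invertible by undoing the steps in reverse order,
    # regardless of how accurate the motion model is.
    f0 = l - 0.5 * warp(h, (-mv[0], -mv[1]))
    f1 = h + warp(f0, mv)
    return f0, f1
```

When the motion model succeeds (the odd frame really is a warped copy of the even frame), the high-pass band is near zero, which is the source of the transform's coding gain; when the model fails, energy leaks into the high-pass band and produces the ghosting artifacts the paper targets.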