The paper addresses the problem of improving the MPEG compression of synthetic video sequences by exploiting knowledge of the original 3D model. Two techniques are proposed for the specific case of a virtual walkthrough in which the point of view is the only moving object in the scene. Technique 1 consists of using only P-frames when the position and direction of the point of view do not change: in this case each frame is identical to the previous one, so P-frames can simply be repeated without any encoding effort, reducing the computational complexity. Technique 2 consists of increasing the quantization parameter when the direction of the point of view is changing, since the resulting increase in distortion is not clearly perceived for fast-moving objects because of the temporal masking effect. Experimental results, compared with model-unaware encoding, show that Technique 1 reduces the bitstream size by about 9% without any appreciable decrease in perceptual quality, while CP...
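To make the two model-aware decisions concrete, the following is a minimal sketch of the per-frame encoder logic they imply, not the paper's implementation; the names (CameraState, BASE_QP, MASKED_QP, ROTATION_THRESHOLD) and the threshold values are hypothetical placeholders chosen for illustration.

```python
from dataclasses import dataclass


@dataclass
class CameraState:
    position: tuple   # (x, y, z) of the viewpoint
    direction: tuple  # normalized view direction


BASE_QP = 8                # nominal quantization parameter (assumed value)
MASKED_QP = 16             # coarser QP used while the view direction changes quickly
ROTATION_THRESHOLD = 0.02  # per-frame change in direction that triggers masking


def choose_encoding(prev: CameraState, curr: CameraState):
    """Return an (action, qp) pair for the current frame.

    Technique 1: if the viewpoint neither moved nor rotated, the frame is
    identical to the previous one, so a repeated P-frame is emitted with no
    encoding effort.
    Technique 2: if the view direction is changing fast, temporal masking hides
    the extra distortion, so the frame is encoded with a larger quantizer.
    Otherwise the frame is encoded with the nominal quantizer.
    """
    moved = prev.position != curr.position
    rotation = sum(abs(a - b) for a, b in zip(prev.direction, curr.direction))

    if not moved and rotation == 0.0:
        return "repeat_previous_frame", None      # Technique 1
    if rotation > ROTATION_THRESHOLD:
        return "encode_p_frame", MASKED_QP        # Technique 2
    return "encode_p_frame", BASE_QP
```

The key design point reflected here is that both decisions are driven purely by the camera parameters available from the 3D model, so no image analysis is needed to detect static frames or fast rotations.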