In this paper, we present an algorithm that uses dynamic texture synthesis for closed-loop video coding. Video textures, also called dynamic textures, are video sequences containing moving texture that exhibits stationarity properties over time, such as water surfaces, whirlwinds, clouds, crowds, or even parts of head-and-shoulder scenes. By learning the temporal statistics of such content, we can in principle synthesize the corresponding areas in future frames of the video. In this paper, we show that this synthesized image content can also be used for prediction in a closed-loop hybrid video coding system, where the encoder decides on the use of such synthesized content and the possible transmission of a residual error signal. This decision is made in an adaptive, rate-distortion-optimized way, so that higher compression performance can be achieved at both high and low bitrates. We show that local adaptation of the algorithm can lead to better compression performance and reduce the computational complexity.
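As a minimal sketch of the encoder-side decision, assuming the standard Lagrangian rate-distortion formulation commonly used in hybrid video coding (the symbols $J$, $D$, $R$, and $\lambda$ are introduced here for illustration and are not defined in the abstract itself), the choice between synthesis-based prediction and conventional coding for a region can be expressed as
\[
m^{*} = \arg\min_{m \in \{\text{synthesis},\, \text{conventional}\}} J(m),
\qquad
J(m) = D(m) + \lambda \, R(m),
\]
where $D(m)$ is the reconstruction distortion, $R(m)$ the rate (including any residual error signal), and $\lambda$ the Lagrange multiplier.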