Most typical statistical and machine learning approaches to time series modeling optimize a single-step prediction error. In multi-step simulation, the learned model is applied iteratively, feeding its previous output back in as the new input. Any such predictor, however, inevitably introduces errors, and these compounding errors change the input distribution for future prediction steps, breaking the train-test i.i.d. assumption common in supervised learning. We present an approach that reuses training data to make a no-regret learner robust to the errors made during multi-step prediction. Our insight is to formulate the problem as imitation learning: the training data serves as a "demonstrator" by providing corrections for the errors made during multi-step prediction. Through this reduction of multi-step time series prediction to imitation learning, we establish a strong theoretical guarantee relating the training error to the multi-step prediction error. We prese...
Arun Venkatraman, Martial Hebert, J. Andrew Bagnell
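
Below is a minimal sketch of the reduction the abstract describes: roll out the learned one-step model on the training series, and let the training data "demonstrate" the correction by pairing each state the rollout actually reaches with the true next observation, then aggregate and retrain. It assumes a scikit-learn-style regressor with fit(X, y) / predict(X); all names (dad_style_train, n_iterations, series_list) are illustrative assumptions, not identifiers from the paper, and this is an illustrative reconstruction rather than the authors' published algorithm verbatim.

import numpy as np
from sklearn.linear_model import Ridge

def dad_style_train(series_list, n_iterations=5, base_learner=Ridge):
    # series_list: list of (T_i x d) arrays, each one training time series.
    # Standard single-step supervised pairs (x_t -> x_{t+1}).
    X = np.vstack([s[:-1] for s in series_list])
    Y = np.vstack([s[1:] for s in series_list])
    model = base_learner().fit(X, Y)

    for _ in range(n_iterations):
        new_X, new_Y = [], []
        for s in series_list:
            x_hat = s[0]  # roll out from the true initial state
            for t in range(len(s) - 1):
                # The training series acts as the "demonstrator": from whatever
                # state the rollout actually reached at time t, the correct
                # target is still the true next observation x_{t+1}.
                new_X.append(x_hat)
                new_Y.append(s[t + 1])
                # Advance the rollout with the model's own (possibly erroneous) prediction.
                x_hat = model.predict(x_hat.reshape(1, -1))[0]
        # Aggregate the correction pairs with the original data and retrain, so the
        # learner is trained on the input distribution induced by its own rollouts.
        X = np.vstack([X, np.asarray(new_X)])
        Y = np.vstack([Y, np.asarray(new_Y)])
        model = base_learner().fit(X, Y)
    return model

In practice one would typically also hold out some trajectories and keep the iteration whose model achieves the lowest multi-step rollout error, rather than simply returning the last model as this sketch does.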