Predictive state representations (PSRs) are models that represent the state of a dynamical system as a set of predictions about future events. The existing work with PSRs focuses on trying to learn exact models, an approach that cannot scale to complex dynamical systems. In contrast, our work takes the first steps in developing a theory of approximate PSRs. We examine the consequences of using an approximate predictive state representation, bounding the error of the approximate state under certain conditions. We also introduce factored PSRs, a class of PSRs with a particular approximate state representation. We show that the class of factored PSRs allows one to tune the degree of approximation by trading off accuracy for compactness. We demonstrate this trade-off empirically on some example systems, using factored PSRs that were learned from data.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning
General Terms: Theory
Keywords: predictive state representations,...
Britton Wolfe, Michael R. James, Satinder P. Singh
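
To make the idea of a predictive state concrete, the following minimal sketch (not taken from the paper; all names and numbers are hypothetical) represents state as a vector of predictions p(q | history) for a set of core tests q, updated with the standard linear-PSR rule after each action-observation pair.

```python
import numpy as np

# Illustrative linear-PSR state (a sketch, not the paper's implementation).
# State is a vector of predictions p(q | h) for core tests q. After taking
# action a and observing o, the usual linear update is
#   p(Q | hao) = (p(Q | h)^T M_ao) / (p(Q | h)^T m_ao)
# where M_ao (matrix) and m_ao (vector) are learned model parameters.

class LinearPSR:
    def __init__(self, p0, M, m):
        self.p = np.asarray(p0, dtype=float)  # predictions for the core tests
        self.M = M                            # M[(a, o)] -> (k x k) matrix
        self.m = m                            # m[(a, o)] -> (k,) vector

    def prob_obs(self, a, o):
        """Predicted probability of observing o after taking action a."""
        return float(self.p @ self.m[(a, o)])

    def update(self, a, o):
        """Condition the predictive state on an action-observation pair."""
        denom = self.prob_obs(a, o)
        self.p = (self.p @ self.M[(a, o)]) / denom
        return self.p

# Toy usage with two core tests and a single action-observation pair.
if __name__ == "__main__":
    M = {("a", "o"): np.array([[0.6, 0.1], [0.2, 0.5]])}
    m = {("a", "o"): np.array([0.7, 0.7])}
    psr = LinearPSR(p0=[0.5, 0.5], M=M, m=m)
    print(psr.update("a", "o"))
```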