Bias/variance analysis is a useful tool for investigating the performance of machine learning algorithms. Conventional analysis decomposes loss into errors due to aspects of the learning process, but in relational domains the inference process introduces an additional source of error. Collective inference techniques introduce additional error both through the use of approximate inference algorithms and through variation in the availability of test-set information. To date, the impact of inference error on model performance has not been investigated. In this paper, we propose a new bias/variance framework that decomposes loss into errors due to both the learning and inference processes. We evaluate the performance of three relational models and show that (1) inference can be a significant source of error, and (2) the models exhibit different types of errors as data characteristics are varied.

Key words: Statistical relational learning, collective inference, evaluation
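The conventional decomposition the abstract contrasts against can be made concrete with a small sketch. The following is an illustrative example (not the paper's relational framework) of the standard squared-loss identity, expected loss = bias² + variance for a noise-free target, estimated empirically by repeatedly "training" a trivial model (the sample mean) on resampled training sets; all names and constants are hypothetical:

```python
import random

# Illustrative sketch of the conventional squared-loss bias/variance
# decomposition over the learning process:
#   E[(y_hat - y)^2] = bias^2 + variance   (noise-free target y)
random.seed(0)

TRUE_VALUE = 2.0   # noise-free target value y
N_TRIALS = 2000    # number of resampled training sets
N_SAMPLES = 10     # training-set size per trial

def train_and_predict():
    """'Learn' a model: estimate the mean from noisy training samples."""
    data = [TRUE_VALUE + random.gauss(0, 1) for _ in range(N_SAMPLES)]
    return sum(data) / len(data)

predictions = [train_and_predict() for _ in range(N_TRIALS)]
mean_pred = sum(predictions) / N_TRIALS

bias_sq = (mean_pred - TRUE_VALUE) ** 2
variance = sum((p - mean_pred) ** 2 for p in predictions) / N_TRIALS
expected_loss = sum((p - TRUE_VALUE) ** 2 for p in predictions) / N_TRIALS

# For squared loss the identity holds exactly (up to float rounding):
# expected_loss == bias_sq + variance
```

The paper's contribution is to extend this kind of decomposition with additional terms attributing error to the (collective) inference process, which the conventional identity above does not capture.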