We review current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and MDL measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. We argue that, while often useful in specific settings, most of these approaches are limited in their ability to give a general assessment of models. We argue that hierarchical methods generally, and hierarchical Bayesian methods specifically, can provide a more thorough evaluation of models in the cognitive sciences. We present two worked examples of hierarchical Bayesian analyses to demonstrate how the approach addresses key questions of descriptive adequacy, parameter inference, prediction, and generalization in principled and coherent ways.
Richard M. Shiffrin, Michael D. Lee, Woojae Kim, Eric-Jan Wagenmakers