We propose a new model of human concept learning that provides a rational analysis of learning feature-based concepts. The model is built on Bayesian inference over a grammatically structured hypothesis space: a "concept language" of logical rules. We compare the model's predictions to human generalization judgments in two well-known category learning experiments and find good agreement for both average and individual participants' generalizations.
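The core idea sketched in the abstract, Bayesian inference over a grammar-generated space of logical rules with a simplicity prior, can be illustrated with a toy implementation. This is not the authors' model; the hypothesis space (conjunctions of feature literals), the exponential simplicity prior, and the noise parameter `eps` are all illustrative assumptions.

```python
import itertools
import math

N_FEATURES = 3  # objects are binary feature vectors of this length

def make_hypotheses():
    """Enumerate a tiny 'concept language': conjunctions of feature literals.

    Each hypothesis is a tuple over features with entries None (don't care),
    0, or 1 (the feature must take that value).
    """
    return list(itertools.product([None, 0, 1], repeat=N_FEATURES))

def applies(hyp, x):
    """True iff object x satisfies every literal in the conjunction."""
    return all(v is None or x[i] == v for i, v in enumerate(hyp))

def complexity(hyp):
    """Rule length: number of literals in the conjunction."""
    return sum(v is not None for v in hyp)

def posterior(data, eps=0.1):
    """Posterior over hypotheses given labeled examples.

    data: list of (features, label) pairs. The prior favors shorter rules
    (log-prior = -complexity); the likelihood assumes labels are flipped
    with probability eps.
    """
    hyps = make_hypotheses()
    log_scores = []
    for h in hyps:
        logp = -complexity(h)  # simplicity prior over the rule grammar
        for x, y in data:
            match = applies(h, x) == y
            logp += math.log(1 - eps) if match else math.log(eps)
        log_scores.append(logp)
    m = max(log_scores)
    ws = [math.exp(s - m) for s in log_scores]  # stable normalization
    z = sum(ws)
    return {h: w / z for h, w in zip(hyps, ws)}

def predict(data, x, eps=0.1):
    """Posterior-predictive probability that x belongs to the concept."""
    post = posterior(data, eps)
    return sum(p * ((1 - eps) if applies(h, x) else eps)
               for h, p in post.items())

# Training data where membership tracks the first feature:
data = [((1, 0, 0), True), ((1, 1, 0), True), ((0, 0, 0), False)]
p_in = predict(data, (1, 0, 1))   # shares the diagnostic feature
p_out = predict(data, (0, 1, 1))  # lacks it
```

Averaging predictions over the full posterior, rather than committing to a single best rule, is what lets such models produce the graded generalization curves that are compared against human judgments.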
Noah D. Goodman, Joshua B. Tenenbaum, Jacob Feldman