Color is of interest in computer vision largely because it is assumed to be helpful for recognition. This assumption has driven much work in color-based image indexing and computational color constancy. However, in many ways, indexing is a poor model for recognition. In this paper we use a recently developed statistical model of recognition that learns to link image region features with words, based on a large unstructured data set. The system is general in that it learns what is recognizable given the data. It also supports a principled testing paradigm, which we exploit here to evaluate the use of color. In particular, we examine the choice of color space, the degradation due to illumination change, and strategies for dealing with that degradation. We evaluate two general approaches to this color constancy problem: specifically, we ask whether it is better to build color variation due to illumination into a recognition system or, instead, to apply color constancy preprocessing...