We consider the problem of image classification when more than one visual feature is available. In such cases, Bayes fusion offers an attractive solution by combining the results of different classifiers (one classifier per feature). This is the general form of the so-called "naive Bayes" approach. This paper compares the performance of Bayes fusion with that of Bayesian classification, which is based on the joint feature distribution. It is well known that the latter has lower bias than the former, unless the features are conditionally independent, in which case the two coincide. However, as originally noted by Friedman, the low variance associated with naive Bayes estimation may mitigate the effect of its inherent bias. Indeed, in the case of small training samples, naive Bayes may outperform Bayes classification in terms of error rate. The contribution of this paper is threefold. First, we present a detailed analysis of the error rate of Bayes fusion assuming that...
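The contrast between the two classifiers can be made concrete with a small simulation. The sketch below, a hypothetical illustration not taken from the paper, assumes two correlated Gaussian class-conditional features: the "naive Bayes" fusion classifier models each feature independently (a product of one-dimensional Gaussians), while the joint Bayesian classifier fits a full covariance matrix. With a very small training sample, the joint model's covariance estimate is noisy, which is the variance effect the abstract alludes to.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class, rho=0.7):
    """Two classes with correlated 2-D Gaussian features (rho violates
    the conditional-independence assumption of naive Bayes)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    X0 = rng.multivariate_normal([0.0, 0.0], cov, n_per_class)
    X1 = rng.multivariate_normal([1.0, 1.0], cov, n_per_class)
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def gauss_logpdf(x, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

def naive_fit(X, y):
    # Per-class, per-feature mean and variance (features treated as independent).
    return {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9) for c in (0, 1)}

def naive_predict(params, X):
    # Fusion = sum of per-feature log-likelihoods (product of densities).
    scores = np.stack(
        [gauss_logpdf(X, *params[c]).sum(axis=1) for c in (0, 1)], axis=1)
    return scores.argmax(axis=1)

def joint_fit(X, y):
    # Per-class mean and full covariance (joint feature distribution).
    return {c: (X[y == c].mean(0),
                np.cov(X[y == c].T) + 1e-6 * np.eye(2)) for c in (0, 1)}

def joint_logpdf(X, mu, cov):
    d = X - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', d, inv, d)
    return -0.5 * (quad + logdet + 2 * np.log(2 * np.pi))

def joint_predict(params, X):
    scores = np.stack(
        [joint_logpdf(X, *params[c]) for c in (0, 1)], axis=1)
    return scores.argmax(axis=1)

# Small training sample, large test sample.
X_tr, y_tr = make_data(5)
X_te, y_te = make_data(2000)

err_nb = (naive_predict(naive_fit(X_tr, y_tr), X_te) != y_te).mean()
err_joint = (joint_predict(joint_fit(X_tr, y_tr), X_te) != y_te).mean()
print(f"naive Bayes fusion error: {err_nb:.3f}, joint Bayes error: {err_joint:.3f}")
```

Repeating this over many random training sets (and varying the sample size) would reproduce the regime the abstract describes, where the biased but low-variance fusion rule can match or beat the joint classifier.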