The performance of any word recognizer depends on the lexicon presented to it. Typically, large lexicons, or lexicons containing similar entries, pose greater difficulty for recognizers. However, the literature lacks a quantitative methodology for capturing the precise dependence between word recognizers and lexicons. This paper presents a model that statistically "discovers" the relation between a word recognizer and the lexicon. It uses model parameters that capture a recognizer's ability to distinguish characters (of the alphabet) and its sensitivity to lexicon size. Such a model is useful for comparing word recognizers by predicting their performance based on the lexicon presented. We demonstrate the accuracy of our model with extensive experiments on five different word recognizers, thousands of images, and tens of lexicons.