We present a methodology for analyzing the performance of Multiple Classifier Systems (MCS) based on the concept of disagreement. The goal is to define an alternative to the conventional recognition-rate criterion, which usually requires an exhaustive search over classifier combinations. The approach defines a Distance-based Disagreement (DbD) measure, using a Euclidean distance computed between confusion matrices, together with a soft-correlation rule that indicates the most likely candidates for the best classifier ensemble. As a case study, we apply this strategy to two different handwriting recognition systems. Experimental results indicate that the proposed method can be used as a low-cost alternative to conventional approaches.
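As a rough illustration only (the paper's own DbD formulation and soft-correlation rule are not reproduced here), the sketch below shows one way a Euclidean distance between two classifiers' confusion matrices could be computed. The use of NumPy, the row-wise normalization, and the helper name `confusion_matrix_distance` are assumptions for the sake of the example, not the authors' definition.

```python
import numpy as np

def confusion_matrix_distance(cm_a: np.ndarray, cm_b: np.ndarray) -> float:
    """Euclidean (Frobenius) distance between two confusion matrices.

    Rows are normalized to class-conditional rates so that matrices
    obtained on test sets of different sizes remain comparable
    (an assumption of this sketch, not necessarily the paper's choice).
    """
    norm_a = cm_a / np.maximum(cm_a.sum(axis=1, keepdims=True), 1)
    norm_b = cm_b / np.maximum(cm_b.sum(axis=1, keepdims=True), 1)
    return float(np.linalg.norm(norm_a - norm_b))

# Example: two hypothetical 3-class classifiers evaluated on the same test set.
cm_1 = np.array([[50,  3,  2],
                 [ 4, 45,  6],
                 [ 1,  5, 49]])
cm_2 = np.array([[48,  4,  3],
                 [ 2, 50,  3],
                 [ 6,  2, 47]])

# A larger distance suggests the two classifiers disagree more in how
# they distribute their errors, which is the intuition behind a
# disagreement-based selection criterion.
print(confusion_matrix_distance(cm_1, cm_2))
```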