For combining classifiers at the measurement level, the diverse outputs of the classifiers should be transformed into uniform measures that represent the confidence of decision, ideally the class probability or likelihood. This paper presents our experimental results on classifier combination using confidence evaluation. We test three types of confidence: log-likelihood, exponential, and sigmoid. To re-scale the classifier outputs, we use three scaling functions based on global normalization and Gaussian density estimation. Experimental results in handwritten digit recognition show that, with confidence evaluation, superior classification performance can be obtained using simple combination rules.
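To make the idea of confidence transformation concrete, the sketch below shows one way a sigmoid mapping and a simple sum-rule combination might be wired together. It is a minimal illustration, not the paper's exact procedure: the function names, the `alpha`/`beta` parameters, and the random example scores are all assumptions introduced here, and in practice the scaling parameters would be estimated from validation data (e.g., via global normalization or Gaussian density estimation of the score distribution).

```python
import numpy as np

def sigmoid_confidence(scores, alpha=1.0, beta=0.0):
    """Map raw classifier scores to (0, 1) confidences with a sigmoid.

    alpha (scale) and beta (shift) are illustrative parameters; they stand
    in for values that would be fitted to the classifier's output scores.
    """
    return 1.0 / (1.0 + np.exp(-alpha * (scores - beta)))

def combine_sum_rule(confidence_list):
    """Simple sum-rule combination: average per-classifier confidences
    and return the class with the highest combined confidence."""
    combined = np.mean(np.stack(confidence_list, axis=0), axis=0)
    return int(np.argmax(combined)), combined

# Example: two classifiers scoring 10 digit classes for one test sample.
rng = np.random.default_rng(0)
scores_a = rng.normal(size=10)   # e.g., negative distances or log-likelihoods
scores_b = rng.normal(size=10)
conf_a = sigmoid_confidence(scores_a)
conf_b = sigmoid_confidence(scores_b)
label, combined = combine_sum_rule([conf_a, conf_b])
print(label, combined.round(3))
```

Once the outputs of all classifiers live on a common (0, 1) confidence scale, simple fixed rules such as the sum rule above become directly applicable without further tuning.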