Many classification scenarios employ multiple independently trained classifiers whose outputs must be combined. However, because each trained classifier exhibits different statistical characteristics, it is inappropriate to combine them using techniques that are blind to these differences. We propose a transform, dtransform, that converts classifier outputs into approximate posterior probabilities while accounting for the statistical behavior of each classifier. The transformed outputs are then comparable and can be combined using any of the classical combination rules. Experimental results demonstrate that the proposed transform yields better estimates of the posterior probabilities than standard transformations, as evidenced by lower KL divergence from the true distribution, higher classification accuracy, and greater effectiveness of the standard classifier combination rules.
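To make the pipeline concrete, the sketch below shows the two-stage structure the abstract describes: first map each classifier's raw scores to (approximate) posterior probabilities, then fuse the per-classifier posteriors with classical combination rules (sum and product). The dtransform itself is not specified here, so a temperature-scaled softmax stands in as a hypothetical, illustrative calibration step; the classifier names and scores are likewise invented for demonstration.

```python
import math

def softmax(scores, temperature=1.0):
    # Illustrative stand-in for a score-to-posterior transform.
    # NOTE: this is NOT dtransform, which this sketch does not specify;
    # any per-classifier calibration could occupy this slot.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sum_rule(posteriors):
    # Classical sum rule: average the per-classifier posteriors.
    n, k = len(posteriors), len(posteriors[0])
    return [sum(p[c] for p in posteriors) / n for c in range(k)]

def product_rule(posteriors):
    # Classical product rule: multiply per-class posteriors, then renormalize.
    k = len(posteriors[0])
    prods = [math.prod(p[c] for p in posteriors) for c in range(k)]
    total = sum(prods)
    return [p / total for p in prods]

# Hypothetical raw scores from two classifiers on a 3-class problem.
scores_a = [2.0, 1.0, 0.1]   # e.g. margins on one scale
scores_b = [0.5, 3.0, 0.2]   # e.g. logits on another scale
# Per-classifier calibration makes the outputs comparable before fusion.
posteriors = [softmax(scores_a, temperature=2.0),
              softmax(scores_b, temperature=1.0)]
print(sum_rule(posteriors))
print(product_rule(posteriors))
```

The key point the abstract argues is that the calibration step must reflect each classifier's own statistical behavior (here crudely mimicked by per-classifier temperatures); only then do rules like sum and product operate on commensurable quantities.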