One of the major challenges in speaker recognition is speaker-emotion variability. The core problem is how to obtain emotion GMMs for speakers whose enrollment data consist only of neutral speech, and how to score test feature vectors against these emotion GMMs. In this paper, we present a new neutral-to-emotion GMM transformation algorithm to overcome this limitation. A polynomial transformation function is learned to represent the relationship between the neutral GMM and the emotion GMM, and it is applied at test time to compute scores against the emotion GMM. Experiments carried out on the MASC corpus show that performance is improved, with an EER reduction of 39.5% from the baseline system.
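
As an illustrative sketch only (not the paper's actual learned mapping), the following Python snippet shows the general idea: a per-dimension polynomial transforms the means of a speaker GMM trained on neutral speech into an "emotion" GMM, and test feature vectors are then scored against the transformed model. The polynomial coefficients, mixture size, and feature dimensionality below are assumptions chosen for illustration.

```python
# Illustrative sketch: a per-dimension polynomial maps neutral GMM means to
# "emotion" GMM means; the transformed GMM then scores test features.
# The polynomial coefficients, GMM size, and feature dimension are assumptions,
# not the paper's actual learned parameters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Enrollment: train a speaker GMM on (synthetic) neutral-speech features.
neutral_feats = rng.normal(size=(2000, 13))          # e.g. 13-dim MFCC-like vectors
neutral_gmm = GaussianMixture(n_components=8, covariance_type="diag",
                              random_state=0).fit(neutral_feats)

# Hypothetical 2nd-order polynomial transformation applied per dimension:
# emotion_mean = c2 * mu^2 + c1 * mu + c0.  In the paper this relationship is
# learned; here the coefficients are made up for the sketch.
c2, c1, c0 = 0.05, 1.1, -0.02

emotion_gmm = GaussianMixture(n_components=8, covariance_type="diag")
emotion_gmm.weights_ = neutral_gmm.weights_
emotion_gmm.means_ = c2 * neutral_gmm.means_ ** 2 + c1 * neutral_gmm.means_ + c0
emotion_gmm.covariances_ = neutral_gmm.covariances_
emotion_gmm.precisions_cholesky_ = 1.0 / np.sqrt(emotion_gmm.covariances_)

# Testing: score (synthetic) emotional-speech features against both models.
test_feats = rng.normal(loc=0.3, size=(500, 13))
print("avg log-likelihood vs neutral GMM:", neutral_gmm.score(test_feats))
print("avg log-likelihood vs emotion GMM:", emotion_gmm.score(test_feats))
```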