This paper deals with the interaction between progressive model adaptation and the score normalization strategies used to reduce variation in likelihood ratio scores when making speaker verification decisions. This interaction is important for establishing robust decision thresholds in practical speaker verification systems. An adaptive score normalization method is proposed that is designed to reduce the drift in likelihood ratio scores that occurs as speaker models are adapted. This method is investigated and compared with other better-known score normalization methods in the context of a joint factor analysis approach to speaker verification. All approaches are evaluated on the progressive adaptation track of the NIST 2005 text-independent speaker verification evaluation plan.
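As background (this sketch is not the paper's proposed adaptive method), a conventional score normalization such as Z-norm standardizes a trial's likelihood ratio score using impostor-score statistics estimated for the target speaker model; a minimal illustration, with all function and variable names hypothetical:

```python
# Illustrative Z-norm-style score normalization (background sketch only;
# not the adaptive normalization proposed in this paper).

def znorm(raw_score, impostor_scores):
    """Standardize a raw log-likelihood-ratio score using the mean and
    standard deviation of scores from impostor trials against the same
    speaker model, so that scores are comparable across models."""
    n = len(impostor_scores)
    mean = sum(impostor_scores) / n
    var = sum((s - mean) ** 2 for s in impostor_scores) / n
    std = var ** 0.5
    # Guard against a degenerate impostor distribution.
    return (raw_score - mean) / std if std > 0 else raw_score - mean

# Example: impostor scores cluster near zero, so a genuine-trial score
# of 2.0 maps to a normalized score in standard-deviation units.
impostors = [-0.5, 0.1, -0.2, 0.4, -0.3, 0.5]
normalized = znorm(2.0, impostors)
```

The motivation for the paper's adaptive variant is that when speaker models are progressively adapted, impostor statistics estimated for the original model no longer match the adapted one, so a fixed normalization of this kind lets the decision threshold drift.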