Discriminative training for hidden Markov models (HMMs) has been a central theme in speech recognition research for many years. One of the most popular techniques is minimum classification error (MCE) training, whose objective function is closely related to the empirical error rate and whose optimization has traditionally relied on gradient descent. In this paper, we provide a new look at the MCE technique in two ways. First, we develop a non-trivial framework in which the MCE objective function is re-formulated as a rational function over multiple sentence-level training tokens. Second, using this novel re-formulation, we develop a new optimization method for discriminatively estimating HMM parameters, based on the growth transformation, or extended Baum–Welch, algorithm. Technical details are given for the use of lattices as a rich representation of competing candidates in MCE training.
© 2007 Elsevier B.V. All rights reserved.
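As a concrete illustration of the reformulation (a sketch in notation assumed here, not taken verbatim from the paper: $X_r$ denotes the $r$-th training utterance, $s_r$ its reference transcription, $s$ a competing sentence hypothesis, $\Lambda$ the HMM parameter set, and $\eta$ a smoothing exponent), a smoothed MCE objective over $R$ sentence-level tokens can be written as
\[
O_{\mathrm{MCE}}(\Lambda) \;=\; \sum_{r=1}^{R} \frac{p_{\Lambda}(X_r, s_r)^{\eta}}{\sum_{s} p_{\Lambda}(X_r, s)^{\eta}},
\]
a sum of rational functions that can be brought over a common denominator into a single rational function $O(\Lambda) = G(\Lambda)/H(\Lambda)$, the form to which growth-transformation theory applies. A representative extended Baum–Welch re-estimate for a Gaussian mean $\mu_i$ (the standard form of such updates, assumed here rather than quoted from the paper) is
\[
\hat{\mu}_i \;=\; \frac{\sum_{r,t}\bigl(\gamma^{\mathrm{num}}_{r,t}(i) - \gamma^{\mathrm{den}}_{r,t}(i)\bigr)\,x_{r,t} + D_i\,\mu_i}{\sum_{r,t}\bigl(\gamma^{\mathrm{num}}_{r,t}(i) - \gamma^{\mathrm{den}}_{r,t}(i)\bigr) + D_i},
\]
where the occupancies $\gamma^{\mathrm{num}}$ and $\gamma^{\mathrm{den}}$ would be accumulated over the reference transcription and the competing-candidate lattices, respectively, and $D_i$ is a per-Gaussian smoothing constant chosen large enough to keep the update stable.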