In Minimum Error Rate Training (MERT), BLEU is often used as the error function, even though it has been shown to correlate less well with human judgment than other metrics such as METEOR and TER. In this paper, we present empirical results showing that, under certain data conditions, parameters tuned on BLEU can yield sub-optimal BLEU scores. Such scores can be improved significantly by tuning on a different metric altogether, e.g. METEOR, by 0.0082 BLEU (a 3.38% relative improvement) on the WMT08 English–French data. We analyze the influence of the number of references and the choice of tuning metric on the outcome of MERT, and experiment on several data sets. We show the problems that arise when tuning on a metric not designed for the single-reference scenario and point out possible solutions.
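As background, the MERT criterion discussed above can be sketched as follows (notation ours, following the standard log-linear formulation rather than anything specific to this paper; $h_m$ are the model features, $\lambda_m$ their weights, $f_s$ and $r_s$ the source and reference sentences of the tuning set, and $\mathrm{Err}$ the error function, e.g. $1-\mathrm{BLEU}$ or $1-\mathrm{METEOR}$). Tuning on a different metric amounts to swapping $\mathrm{Err}$ while leaving the decoder unchanged:
\begin{align*}
  \hat{e}(f_s; \lambda) &= \operatorname*{arg\,max}_{e} \sum_{m=1}^{M} \lambda_m h_m(e, f_s) \\
  \hat{\lambda} &= \operatorname*{arg\,min}_{\lambda} \mathrm{Err}\bigl(\{r_s\}_{s=1}^{S}, \{\hat{e}(f_s; \lambda)\}_{s=1}^{S}\bigr)
\end{align*}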