Statistical language models play a major role in current speech recognition systems. Most of these models have focused on relatively local interactions between words. Recently, however, there have been several attempts to incorporate other knowledge sources, in particular longer-range word dependencies, in order to improve speech recognizers. We present one such method, which automatically exploits properties of topic continuity. When a baseline speech recognition system generates alternative hypotheses for a sentence, we use word preferences derived from topic coherence to select the best hypothesis. In our experiment, we achieved a 0.65% improvement in word error rate over the baseline system, which corresponds to 10.40% of the possible word error improvement.
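As a rough illustration of this style of hypothesis selection, the sketch below rescores an N-best list by adding a topic-coherence bonus to each hypothesis's baseline score. The coherence measure (word overlap with the preceding discourse), the interpolation weight `alpha`, and all scores are hypothetical placeholders, not the method or numbers from this paper.

```python
# Minimal sketch of rescoring N-best hypotheses with a topic-coherence bonus.
# The coherence score is a crude stand-in: the fraction of longer words in the
# hypothesis that also occur in the preceding context.

def topic_coherence(hypothesis_words, context_words):
    """Crude coherence proxy: overlap between hypothesis and prior context."""
    context = set(context_words)
    content = [w for w in hypothesis_words if len(w) > 3]  # skip short function words
    if not content:
        return 0.0
    return sum(w in context for w in content) / len(content)

def rescore(nbest, context_words, alpha=0.5):
    """Pick the hypothesis maximizing baseline score + alpha * coherence.

    `nbest` is a list of (baseline_log_score, list_of_words) pairs,
    as produced by a first-pass recognizer.
    """
    return max(
        nbest,
        key=lambda h: h[0] + alpha * topic_coherence(h[1], context_words),
    )

# Example: the second hypothesis wins because "stocks" fits the prior topic,
# even though its baseline score is slightly lower.
context = "the market rallied as investors bought stocks".split()
nbest = [
    (-12.1, "the socks rose sharply today".split()),
    (-12.2, "the stocks rose sharply today".split()),
]
best = rescore(nbest, context)
print(" ".join(best[1]))  # -> "the stocks rose sharply today"
```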