Language models for speech recognition tend to be brittle across domains: their performance degrades when the genre or topic of the recognition task diverges from that of the text on which they were trained. A number of adaptation methods, exploiting either lexical co-occurrence or topic cues, have been developed to mitigate this problem with varying degrees of success. In this paper, we study a novel use of relevance information for dynamic language model adaptation in speech recognition. The proposed framework not only inherits the merits of several existing techniques but also provides a flexible yet systematic way to model the lexical and topical relationships between a search history and an upcoming word. Empirical results on large vocabulary continuous speech recognition show that the methods derived from our framework are promising alternatives to the existing language model adaptation methods compared in this paper.
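As a rough point of reference only, the sketch below shows one conventional form of dynamic language model adaptation in the spirit the abstract describes: a static background bigram model is linearly interpolated with a cache-style unigram model re-estimated from the decoded search history, so that words lexically related to the recent history receive boosted probability. The class name, the interpolation weight `alpha`, and the additive smoothing are illustrative assumptions; this is not the paper's relevance-based framework, only a generic baseline it would be compared against.

```python
from collections import Counter, defaultdict


class AdaptiveBigramLM:
    """Background bigram LM interpolated with a history-driven unigram cache."""

    def __init__(self, corpus, vocab, alpha=0.8, smoothing=1e-3):
        self.vocab = vocab
        self.alpha = alpha          # weight on the static background model (assumption)
        self.smoothing = smoothing  # additive smoothing constant (assumption)
        self.history = []           # running search history of decoded words
        # Estimate background bigram counts from a training corpus.
        self.bigram = defaultdict(Counter)
        for sent in corpus:
            for prev, word in zip(sent, sent[1:]):
                self.bigram[prev][word] += 1

    def _background(self, word, prev):
        # Smoothed static bigram probability P_bg(word | prev).
        counts = self.bigram[prev]
        total = sum(counts.values()) + self.smoothing * len(self.vocab)
        return (counts[word] + self.smoothing) / total

    def _cache(self, word):
        # Unigram relevance estimate from the search history so far.
        if not self.history:
            return 1.0 / len(self.vocab)
        hist = Counter(self.history)
        total = len(self.history) + self.smoothing * len(self.vocab)
        return (hist[word] + self.smoothing) / total

    def prob(self, word, prev):
        # Dynamic probability: interpolate background and history models.
        return (self.alpha * self._background(word, prev)
                + (1.0 - self.alpha) * self._cache(word))

    def observe(self, word):
        # Extend the search history as decoding proceeds.
        self.history.append(word)


# Toy usage: after observing a topically loaded history, related words
# receive extra probability mass relative to the background model alone.
corpus = [["the", "stock", "market", "fell"], ["the", "market", "rose"]]
vocab = {"the", "stock", "market", "fell", "rose"}
lm = AdaptiveBigramLM(corpus, vocab)
for w in ["the", "stock", "market"]:
    lm.observe(w)
print(lm.prob("market", "the"))
```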