We describe our latest attempt at adaptive language modeling. At the heart of our approach is a Maximum Entropy (ME) model which incorporates many knowledge sources in a consistent manner. The other components are a selective unigram cache, a conditional bigram cache, and a conventional static trigram. We describe the knowledge sources used to build such a model with ARPA's official WSJ corpus, and report on perplexity and word error rate results obtained with it. Then, three different adaptation paradigms are discussed, and an additional experiment, based on AP wire data, is used to compare them.
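The abstract names four component models but does not say here how they are combined. The sketch below assumes simple linear interpolation, a standard way to merge such components, purely for illustration; all function names, component callables, and weights are hypothetical, not the paper's implementation.

```python
def combined_prob(word, history, models, weights):
    """Interpolate P(word | history) across component language models.

    models  -- dict of name -> callable(word, history) returning a probability
    weights -- dict of name -> interpolation weight; weights should sum to 1.0
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * models[name](word, history)
               for name in models)

# Hypothetical usage with the four components named in the abstract:
# models = {
#     "max_entropy":    me_model.prob,   # ME model over many knowledge sources
#     "unigram_cache":  uni_cache.prob,  # selective unigram cache
#     "bigram_cache":   bi_cache.prob,   # conditional bigram cache
#     "static_trigram": trigram.prob,    # conventional static trigram
# }
# weights = {"max_entropy": 0.5, "unigram_cache": 0.1,
#            "bigram_cache": 0.1, "static_trigram": 0.3}
# p = combined_prob("market", ("the", "stock"), models, weights)
```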