In a previous paper we proposed Web-based language models based on possibility theory; these models explicitly represent the possibility of word sequences. In this paper we investigate how best to combine such models with classical probabilistic models in the context of automatic speech recognition. We propose several combination approaches, depending on the nature of the models being combined. Relative to the baseline, the best combination yields an absolute word error rate reduction of about 1% on broadcast news transcription and of 3.5% on domain-specific multimedia document transcription.
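As a minimal sketch of what "combining" a probabilistic and a possibilistic language model score might look like, the following illustrates a log-linear interpolation of the two scores. This is an illustrative assumption, not the paper's actual combination scheme: the function name, the interpolation weight `alpha`, and the decision to take the log of the possibility degree are all hypothetical choices made here for the example.

```python
import math

def combine_scores(log_prob: float, possibility: float, alpha: float = 0.5) -> float:
    """Hypothetical log-linear combination of a probabilistic LM log-score
    with a possibility degree in (0, 1]; alpha is a tuning weight."""
    # Take the log of the possibility degree so both terms live on the
    # same (log) scale; guard against log(0) for zero-possibility sequences.
    log_poss = math.log(max(possibility, 1e-12))
    return alpha * log_prob + (1.0 - alpha) * log_poss

# With alpha = 1.0 the combination reduces to the probabilistic score alone;
# with alpha = 0.0 it reduces to the possibilistic score alone.
score = combine_scores(log_prob=-1.0, possibility=1.0, alpha=0.5)
```

In practice the weight `alpha` would be tuned on held-out data, and the appropriate combination depends on the nature of the models, as the abstract notes.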