This work presents a lookahead-based exploration strategy for a model-based learning agent, enabling it to explore the opponent's behavior during interaction in a multi-agent system. Rather than holding a single model, the model-based agent maintains a mixed opponent model: a distribution over a set of candidate models that reflects its uncertainty about the opponent's strategy. Each action is evaluated according to its long-run contribution both to the expected utility and to the agent's knowledge of the opponent's strategy. We present an efficient algorithm that returns an almost optimal exploration strategy against a given mixed model, and a learning method for acquiring a mixed model consistent with the opponent's past behavior. We report experimental results in the Iterated Prisoner's Dilemma game demonstrating that the lookahead-based exploration strategy outperforms other exploration methods.
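To make the mixed-model idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm. It assumes a small set of hypothetical deterministic opponent models for the Iterated Prisoner's Dilemma, maintains a belief distribution over them, prunes models inconsistent with observed play, and scores actions by bounded-depth lookahead over expected discounted payoff. All names here (`update_mixed_model`, `lookahead_value`, the candidate models, and the discount factor) are assumptions introduced for illustration.

```python
# Hypothetical deterministic opponent models for the Iterated Prisoner's
# Dilemma; each maps the agent's previous move to the opponent's next move.
TIT_FOR_TAT = lambda my_last: my_last
ALWAYS_DEFECT = lambda my_last: "D"
ALWAYS_COOPERATE = lambda my_last: "C"

# Agent's payoff for (my_move, opponent_move) under a standard PD matrix.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def update_mixed_model(belief, my_last, observed):
    """Keep only models consistent with the opponent's observed move,
    then renormalize the distribution (the belief update)."""
    consistent = {m: p for m, p in belief.items() if m(my_last) == observed}
    total = sum(consistent.values())
    return {m: p / total for m, p in consistent.items()}

def lookahead_value(belief, my_last, depth, gamma=0.95):
    """Value of the best action against the mixed model, computed by
    bounded-depth lookahead over expected discounted payoff."""
    if depth == 0:
        return 0.0
    best = float("-inf")
    for my_move in ("C", "D"):
        value = 0.0
        for model, prob in belief.items():
            opp_move = model(my_last)
            # Observing opp_move also refines the belief, so exploring a
            # move has informational value beyond its immediate payoff.
            posterior = update_mixed_model(belief, my_last, opp_move)
            value += prob * (PAYOFF[(my_move, opp_move)]
                             + gamma * lookahead_value(posterior, my_move,
                                                       depth - 1, gamma))
        best = max(best, value)
    return best

# Usage: a uniform prior over three candidate models; one observation rules
# out the inconsistent model, and lookahead then scores the remaining mix.
belief = {TIT_FOR_TAT: 1 / 3, ALWAYS_DEFECT: 1 / 3, ALWAYS_COOPERATE: 1 / 3}
belief = update_mixed_model(belief, my_last="C", observed="C")
print(lookahead_value(belief, my_last="C", depth=3))
```

In this toy setting the informational benefit of an action shows up implicitly: moves whose outcomes discriminate between candidate models lead to sharper posteriors, which raise the value of subsequent play in the lookahead recursion.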