We present a new method for state estimation in multiagent settings characterized by continuous or large discrete state spaces. State estimation in multiagent settings involves updating an agent’s belief over both the physical states and the space of other agents’ models. We factor out the models of the other agents and update the agent’s belief over these models as exactly as possible. Simultaneously, we sample particles from the distribution over the large physical state space and project the particles forward in time. Performance of the previous approach – the interactive particle filter – degrades in settings with large state spaces because it distributes the particles over both the physical state space and the space of other agents’ models. A comparative analysis on two problem domains demonstrates that our approach achieves significantly improved estimation accuracy and is computationally less expensive.
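The factored update described above is in the spirit of Rao-Blackwellised particle filtering: particles cover the physical state, while each particle carries an exact belief over the (enumerable) space of other agents’ models. The following is a minimal illustrative sketch; the model space, dynamics, and likelihood functions here are invented toy assumptions, not the paper’s actual domains or equations.

```python
import math
import random

MODELS = ["m1", "m2"]  # small, enumerable space of other-agent models (assumed)

def transition(x, model, rng):
    # Toy continuous dynamics: the drift depends on the other agent's model.
    drift = 1.0 if model == "m1" else -1.0
    return x + drift + rng.gauss(0.0, 0.5)

def obs_likelihood(z, x):
    # Toy Gaussian observation likelihood for the physical state.
    return math.exp(-0.5 * (z - x) ** 2)

def model_likelihood(z, model):
    # Toy likelihood of the observation under each candidate model,
    # used for the exact (factored-out) model-belief update.
    return 0.7 if (model == "m1") == (z > 0) else 0.3

def step(particles, z, rng):
    """One filtering step: exact Bayes update of each particle's model
    belief, then sampled propagation of the physical state."""
    propagated, weights = [], []
    for x, model_belief in particles:
        # Exact Bayesian update over the model space.
        mb = {m: p * model_likelihood(z, m) for m, p in model_belief.items()}
        total = sum(mb.values())
        mb = {m: p / total for m, p in mb.items()}
        # Sample a model from the updated belief and propagate the state.
        m = rng.choices(list(mb), weights=list(mb.values()))[0]
        x_next = transition(x, m, rng)
        propagated.append((x_next, mb))
        weights.append(obs_likelihood(z, x_next))
    # Resample physical-state particles by observation weight.
    idx = rng.choices(range(len(propagated)), weights=weights, k=len(propagated))
    return [propagated[i] for i in idx]

rng = random.Random(0)
particles = [(0.0, {m: 1.0 / len(MODELS) for m in MODELS}) for _ in range(200)]
for z in [1.2, 0.8, 1.5]:
    particles = step(particles, z, rng)
```

Because the model belief is updated analytically rather than sampled, no particles are wasted covering the model space, which is the intuition behind the accuracy gain over spreading particles across both spaces.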