In this paper, a dynamic multi-modal fusion scheme for tracking multiple targets with Monte Carlo filters is presented, with the goal of achieving robustness by combining complementary likelihoods based on color and foreground segmentation. The generality of the proposed approach allows the measurements to be defined at different levels (pixel, feature, and object space) through dynamic data fusion. We demonstrate the approach in a people-tracking context using a multi-target MCMC particle filter.
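The abstract does not spell out the fusion rule; purely as an illustrative sketch, the snippet below assumes the color and foreground-segmentation cues are combined per particle with a weighted geometric mean inside a generic particle-filter weight update. The function names and the fixed reliability weight `alpha` are hypothetical, not taken from the paper; a dynamic scheme would adapt such a weight over time.

```python
import numpy as np

def fuse_likelihoods(color_lik, fg_lik, alpha=0.5):
    """Combine two per-particle likelihoods with a reliability weight alpha.

    color_lik, fg_lik: arrays of shape (n_particles,), the likelihood of each
    particle under the color cue and the foreground-segmentation cue.
    alpha: assumed relative reliability of the color cue in [0, 1]; a dynamic
    fusion scheme would adapt it online rather than keep it fixed.
    """
    # Weighted geometric mean: an unreliable cue is softly discounted while
    # the fused value remains a valid (unnormalized) likelihood.
    return color_lik ** alpha * fg_lik ** (1.0 - alpha)

def update_weights(weights, color_lik, fg_lik, alpha=0.5):
    """Standard particle-filter weight update using the fused likelihood."""
    w = weights * fuse_likelihoods(color_lik, fg_lik, alpha)
    return w / w.sum()

# Toy example: 5 particles with arbitrary cue likelihoods.
rng = np.random.default_rng(0)
weights = np.full(5, 1.0 / 5)
color_lik = rng.uniform(0.1, 1.0, 5)
fg_lik = rng.uniform(0.1, 1.0, 5)
print(update_weights(weights, color_lik, fg_lik, alpha=0.7))
```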