In this work we present a novel multi-modal mixed-state dynamic Bayesian network (DBN) for robust meeting event classification. The model combines data from lapel microphones, a microphone array, and visual cues to structure meetings into segments. Within the DBN, a multistream hidden Markov model (HMM) is coupled with a linear dynamical system (LDS) to compensate for disturbances in the data; the HMM serves as the driving input for the LDS. The model can handle noise and occlusions in all channels. Experimental results on real meeting data show that the new model clearly outperforms all single-stream approaches. Compared to a baseline multi-modal early fusion HMM, the new DBN is more than 2.5%, respectively
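
To make the HMM/LDS coupling concrete, the following is a minimal sketch (not the authors' implementation) of a mixed-state model in which the discrete HMM state q_t acts as the driving input of an LDS, x_t = A x_{t-1} + B u(q_t) + w_t with observations y_t = C x_t + v_t. All dimensions, parameter values, and the mapping u(q) are illustrative assumptions, not settings from the paper.

```python
# Hedged sketch of a mixed-state (discrete + continuous) dynamic model:
# a discrete HMM state drives a linear dynamical system.
import numpy as np

rng = np.random.default_rng(0)

n_states = 3          # hypothetical number of discrete HMM states (meeting events)
dim_x, dim_y = 2, 2   # hypothetical continuous state / observation dimensions

# Illustrative HMM parameters.
pi = np.array([0.6, 0.3, 0.1])                  # initial state distribution
Trans = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])          # state transition matrix

# Illustrative LDS parameters; u[q] is one driving input per discrete state.
A = 0.9 * np.eye(dim_x)                         # continuous state dynamics
B = np.eye(dim_x)                               # driving-input matrix
C = np.eye(dim_y)                               # observation matrix
u = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])                      # driving input for each HMM state
Q = 0.05 * np.eye(dim_x)                        # process noise covariance
R = 0.10 * np.eye(dim_y)                        # observation noise covariance

def sample_trajectory(T=50):
    """Sample (q_t, y_t) from the coupled discrete/continuous model."""
    q = rng.choice(n_states, p=pi)
    x = np.zeros(dim_x)
    qs, ys = [], []
    for _ in range(T):
        q = rng.choice(n_states, p=Trans[q])     # discrete HMM transition
        x = A @ x + B @ u[q] + rng.multivariate_normal(np.zeros(dim_x), Q)
        y = C @ x + rng.multivariate_normal(np.zeros(dim_y), R)
        qs.append(q)
        ys.append(y)
    return np.array(qs), np.array(ys)

if __name__ == "__main__":
    qs, ys = sample_trajectory()
    print("discrete states:", qs[:10])
    print("first observations:\n", ys[:3])
```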