A new method for object tracking in video sequences is presented. The method exploits the ability of particle filters to handle the multimodal distributions that arise in cluttered scenes. The tracked object is described by several models of different complexity, which are probabilistically linked together. The parameters of each model are updated hierarchically, so that the simpler models, which are updated first, guide the search in the parameter space of the more complex models toward relevant regions. This strategy improves the target representation, since multiple models are used, while reducing the overall complexity. The likelihood for each object model is computed from one or more visual cues, which increases the robustness of the proposed algorithm. Our method is evaluated by fusing a salient-point model and a contour model, and its effectiveness is demonstrated.
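To make the hierarchical update concrete, the following is a minimal sketch, not the paper's implementation: a coarse particle set over position only is weighted and resampled first, and the surviving particles seed the sampling of a richer state (position plus scale here), whose weight fuses several cue likelihoods. The function names, the state parameterization, and the Gaussian placeholder cues are all illustrative assumptions.

    # Sketch of a two-level hierarchical particle filter (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def coarse_likelihood(pos):
        # Placeholder cue: score a 2-D position, e.g. salient-point matches.
        return np.exp(-0.5 * np.sum((pos - 50.0) ** 2) / 100.0)

    def fine_likelihood(state):
        # Placeholder fused cue: score a richer state (position + scale),
        # e.g. a product of salient-point and contour likelihoods.
        pos, scale = state[:2], state[2]
        return coarse_likelihood(pos) * np.exp(-0.5 * (scale - 1.0) ** 2 / 0.01)

    def resample(particles, weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]

    N = 200
    # Level 1: simple model, position only.
    coarse = rng.uniform(0, 100, size=(N, 2))
    w = np.array([coarse_likelihood(p) for p in coarse])
    w /= w.sum()
    coarse = resample(coarse, w)

    # Level 2: complex model (position + scale), sampled around the coarse
    # estimates so the search stays in relevant regions of the state space.
    fine = np.column_stack([
        coarse + rng.normal(0.0, 2.0, size=(N, 2)),  # positions near coarse particles
        1.0 + rng.normal(0.0, 0.1, size=N),          # scale prior
    ])
    wf = np.array([fine_likelihood(s) for s in fine])
    wf /= wf.sum()
    estimate = wf @ fine  # weighted-mean state estimate
    print("estimated state (x, y, scale):", estimate)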
Alexandros Makris, Dimitrios I. Kosmopoulos, Stavr