Large shape variability and partial occlusions challenge most object detection and tracking methods for nonrigid targets such as pedestrians. Single-camera tracking is limited in the scope of its applications because of the limited field of view (FOV) of a camera. This motivates the need for a multiple-camera system to completely monitor and track a target, especially in the presence of occlusion. When the object is viewed by multiple cameras, there is a fair chance that it is not occluded in all of them simultaneously. In this paper, we develop a method for fusing the tracks obtained from two cameras placed at different positions. First, the object to be tracked is identified on the basis of shape information measured by the MPEG-7 ART (Angular Radial Transform) shape descriptor. Next, single-camera tracking is performed using the unscented Kalman filter, and finally the tracks from the two cameras are fused. A sensor network model is proposed to deal with the situations in ...
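As a minimal sketch of the tracking and fusion steps described above, the following pure-Python example implements a scalar unscented Kalman filter and a simple covariance-weighted fusion of two track estimates. This is illustrative only and assumes a one-dimensional state with identity motion and measurement models; the paper's pedestrian tracker would use a richer state (e.g. position and velocity in the image plane), and the specific fusion rule shown here is a standard inverse-covariance weighting, not necessarily the one the paper proposes.

```python
import math

def sigma_points(x, P, alpha=1.0, kappa=0.0, beta=2.0):
    """Sigma points and weights for a scalar (n = 1) state."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    s = math.sqrt((n + lam) * P)
    pts = [x, x + s, x - s]
    wm = [lam / (n + lam), 1.0 / (2 * (n + lam)), 1.0 / (2 * (n + lam))]
    wc = list(wm)
    wc[0] += 1.0 - alpha**2 + beta
    return pts, wm, wc

def ukf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle of a scalar unscented Kalman filter."""
    # Predict: propagate sigma points through the motion model f.
    pts, wm, wc = sigma_points(x, P)
    fx = [f(p) for p in pts]
    x_pred = sum(w * p for w, p in zip(wm, fx))
    P_pred = sum(w * (p - x_pred) ** 2 for w, p in zip(wc, fx)) + Q
    # Update: propagate new sigma points through the measurement model h.
    pts, wm, wc = sigma_points(x_pred, P_pred)
    hz = [h(p) for p in pts]
    z_pred = sum(w * p for w, p in zip(wm, hz))
    S = sum(w * (p - z_pred) ** 2 for w, p in zip(wc, hz)) + R
    Pxz = sum(w * (px - x_pred) * (pz - z_pred)
              for w, px, pz in zip(wc, pts, hz))
    K = Pxz / S  # Kalman gain
    return x_pred + K * (z - z_pred), P_pred - K * K * S

def fuse_tracks(x1, P1, x2, P2):
    """Inverse-covariance-weighted fusion of two independent estimates."""
    P_f = (P1 * P2) / (P1 + P2)
    x_f = (P2 * x1 + P1 * x2) / (P1 + P2)
    return x_f, P_f

# Demo: track a stationary target (true position 5.0) from noisy
# measurements in one camera, using identity motion/measurement models.
identity = lambda v: v
x, P = 0.0, 1.0
for z in [5.2, 4.9, 5.1, 5.0, 4.8, 5.05]:
    x, P = ukf_step(x, P, z, f=identity, h=identity, Q=0.01, R=0.1)

# Fuse with a (hypothetical) second camera's track of the same target.
x_fused, P_fused = fuse_tracks(x, P, 5.1, 0.05)
```

Note that the fused covariance `P_fused` is smaller than either input covariance, which is the point of multi-camera fusion: when one view is occluded (large covariance), the fused estimate leans on the other camera.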
Manas Kamal Bhuyan, Brian C. Lovell, Abbas Bigdeli