In this paper, we propose an observable-area model of the scene for real-time cooperative object tracking by multiple cameras. Whatever task is defined, knowledge of partners' abilities is necessary for cooperative action. In particular, for tracking a moving object in the scene, every Active Vision Agent (AVA), a rational model of a network-connected computer with an active camera, should know the area of the scene that is observable by each AVA. Each AVA can then decide its target object and gazing direction while taking other AVAs' actions into account. To realize such cooperative gazing, the system gathers the observable-area information from all AVAs and incrementally generates the observable-area model at each frame during tracking. The system then cooperatively tracks the object by utilizing both the observable-area model and the object's motion estimated at each frame. Experimental results demonstrate the effectiveness of the cooperation among the AVAs with ...
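As a rough illustration of the idea, and not the paper's actual implementation, the sketch below assumes each AVA reports its observable area as cells of a 2-D grid over the scene floor; the cells are merged each frame into a shared observable-area model, and a simple constant-velocity prediction of the object's motion then lets each AVA check whether the predicted target position falls inside its observable area before deciding to gaze at it. All names and parameters here (ObservableAreaModel, grid shape, cell size, frame rate) are hypothetical.

```python
import numpy as np

class ObservableAreaModel:
    """Hypothetical shared model: a boolean grid per AVA over the scene floor,
    accumulated incrementally from the cells each AVA reports at every frame."""

    def __init__(self, n_avas, grid_shape=(100, 100), cell_size=0.1):
        self.cell_size = cell_size                            # metres per grid cell
        self.observable = np.zeros((n_avas,) + grid_shape, dtype=bool)

    def update(self, ava_id, observed_cells):
        """Merge the (row, col) cells an AVA reports as observable this frame."""
        rows, cols = zip(*observed_cells)
        self.observable[ava_id, rows, cols] = True

    def can_observe(self, ava_id, position):
        """Check whether a scene position (x, y) in metres lies in the AVA's
        accumulated observable area."""
        r = int(position[1] / self.cell_size)
        c = int(position[0] / self.cell_size)
        h, w = self.observable.shape[1:]
        return 0 <= r < h and 0 <= c < w and self.observable[ava_id, r, c]


def predict_position(position, velocity, dt=1.0 / 30.0):
    """Constant-velocity prediction of the target position one frame ahead
    (a stand-in for whatever motion estimate the system actually uses)."""
    return position + velocity * dt


# Usage sketch: decide which AVAs should gaze toward the predicted target position.
model = ObservableAreaModel(n_avas=3)
model.update(0, [(r, c) for r in range(0, 50) for c in range(0, 50)])
model.update(1, [(r, c) for r in range(40, 100) for c in range(40, 100)])

target_pos = np.array([2.0, 2.0])       # metres
target_vel = np.array([0.5, 0.0])       # metres per second
predicted = predict_position(target_pos, target_vel)

gazing_avas = [a for a in range(3) if model.can_observe(a, predicted)]
print("AVAs able to gaze at the predicted target position:", gazing_avas)
```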